
Does artificial sweetener aspartame really cause cancer? What the WHO listing means for your diet soft drink habit

<p><em><a href="https://theconversation.com/profiles/evangeline-mantzioris-153250">Evangeline Mantzioris</a>, <a href="https://theconversation.com/institutions/university-of-south-australia-1180">University of South Australia</a></em></p> <p>The International Agency for Research on Cancer (IARC), which is the specialised cancer agency of the World Health Organization, has classified aspartame as a <a href="https://www.who.int/news/item/14-07-2023-aspartame-hazard-and-risk-assessment-results-released">possible carcinogenic hazard to humans</a>.</p> <p>Another branch of the WHO, the Joint WHO and Food and Agriculture Organization’s Expert Committee on Food Additives, has assessed the risk and developed recommendations on how much aspartame is safe to consume. They have recommended the acceptable daily intake be 0 to 40mg per kilo of body weight, as we currently have <a href="https://www.foodstandards.gov.au/consumer/additives/aspartame/Pages/default.aspx">in Australia</a>.</p> <p>A hazard is different to a risk. The hazard rating means it’s an agent that is capable of causing cancer; a risk measures the likelihood it could cause cancer.</p> <p>So what does this hazard assessment mean for you?</p> <h2>Firstly, what is aspartame?</h2> <p><a href="https://www.foodstandards.gov.au/consumer/additives/aspartame/Pages/default.aspx">Aspartame is an artificial sweetener</a> that is 200 times sweeter than sugar, but without any kilojoules.</p> <p>It’s used in a <a href="https://www.foodstandards.gov.au/consumer/additives/aspartame/Pages/default.aspx">variety of products</a> including carbonated drinks such as Coke Zero, Diet Coke, Pepsi Max and some home brand offerings. 
You can identify aspartame in drinks and foods by looking for additive number 951.</p> <p>Food products such as yogurt and confectionery may also contain aspartame, but it’s not stable at warm temperatures and thus not used in baked goods.</p> <p>Commercial names of aspartame include Equal, Nutrasweet, Canderel and Sugar Twin. In Australia the acceptable daily intake is 40mg per kilo of body weight per day, which is about 60 sachets.</p> <p><a href="https://www.fda.gov/food/food-additives-petitions/aspartame-and-other-sweeteners-food#:%7E:text=How%20many%20packets%20can%20a,based%20on%20its%20sweetness%20intensity%3F&amp;text=Notes%20About%20the%20Chart%3A,50%20mg%2Fkg%20bw%2Fd">In America</a> the acceptable daily intake has been set higher, at 50mg per kilo of body weight, which equates to about 75 sachets.</p> <h2>What evidence have they used to come to this conclusion?</h2> <p><a href="https://www.who.int/news/item/14-07-2023-aspartame-hazard-and-risk-assessment-results-released">IARC looked closely</a> at the <a href="https://cdn.who.int/media/docs/default-source/nutrition-and-food-safety/july-13-final-summary-of-findings-aspartame.pdf?sfvrsn=a531e2c1_5&amp;download=true">evidence base</a> from around the world – using data from observational studies, experimental studies and animal studies.</p> <p>They found there was some limited evidence in human studies linking aspartame and cancer (specifically liver cancer) and limited evidence from animal studies as well.</p> <p>They also considered the biological mechanism studies which showed how cancer may develop from the consumption of aspartame. Usually these are lab-based studies which show exactly how exposure to the agent may lead to a cancer. In this case they found there was limited evidence for how aspartame might cause cancer.</p> <p>There were only three human studies that looked at cancer and aspartame intake. 
These large observational studies used the intake of soft drinks as an indicator of aspartame intake.</p> <p>All three found a positive association between artificially sweetened beverages and liver cancer in either all of the population they were studying or sub-groups within them. But these studies could not rule out other factors that may have been responsible for the findings.</p> <p>A study <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6284800/">conducted in Europe</a> followed 475,000 people for 11 years and found that each additional serve of diet soft drink consumed per week was linked to a 6% increased risk of liver cancer. However, the scientists noted that because liver cancer is rare, the number of cases in the study was still small.</p> <p><a href="https://pubmed.ncbi.nlm.nih.gov/35728406/">In a study from the US</a>, increased risk of liver cancer was seen in people with diabetes who drank two or more cans of diet soda a week.</p> <p>The <a href="https://aacrjournals.org/cebp/article/31/10/1907/709398/Sugar-and-Artificially-Sweetened-Beverages-and">third study</a>, also from the US, found an increase in liver cancer risk in men who never smoked and drank two or more artificially sweetened drinks a day.</p> <p>From this, they decided to declare aspartame a Group 2b “possible carcinogen”. But they have also said more and better research is needed to further understand the relationship between aspartame and cancer.</p> <p>IARC has four categories (groupings) available for potential substances (or as they are referred to by IARC, “agents”) that may cause cancer.</p> <h2>What does each grouping mean?</h2> <p><strong>Group 1 Carcinogenic to humans:</strong> an agent in this group is carcinogenic, which means there is convincing evidence from human studies and we know precisely <em>how</em> it causes cancer. 
There are 126 agents in this group, including tobacco smoking, alcohol, processed meat, solar radiation and ionising radiation.</p> <p><strong>Group 2a Probably carcinogenic to humans:</strong> there are positive associations between the agent and cancer in humans, but there may still be other explanations for the association which were not fully examined in the studies. There are 95 agents in this group, including red meat, DDT insecticide and night shift work.</p> <p><strong>Group 2b Possibly carcinogenic to humans:</strong> this means limited evidence of causing cancer in humans, but sufficient evidence from animal studies, or the mechanism of how the agent may be carcinogenic is well understood. This basically means the current evidence indicates an agent may possibly be carcinogenic, but more scientific evidence from better-conducted studies is needed. There are now <a href="https://monographs.iarc.who.int/agents-classified-by-the-iarc/">323</a> agents in this group, including aloe vera (whole leaf extract), ginkgo biloba and lead.</p> <p><strong>Group 3 Not classifiable as a carcinogen:</strong> there’s not enough evidence from humans or animals, and there is limited mechanistic evidence of how it may be a carcinogen. There are 500 agents in this group.</p> <h2>So do I have to give up my diet soft drink habit?</h2> <p>For a 70kg person you would need to consume about 14 cans (over 5 litres) of soft drink sweetened with aspartame a day to reach the acceptable daily intake.</p> <p>But we need to remember aspartame may also be added to other foods we consume. So this is an unrealistic amount to consume, but not impossible.</p> <p>We also need to consider all the evidence on aspartame together. 
The foods we typically see aspartame in are processed or ultra-processed, which have recently also been <a href="https://theconversation.com/ultra-processed-foods-are-trashing-our-health-and-the-planet-180115">shown to be detrimental to health</a>.</p> <p>And artificial sweeteners (including aspartame) <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2892765/#!po=59.3750">can make people crave more sugar</a>, prompting them to eat more food and potentially gain weight.</p> <p>Altogether, this indicates we should be more careful about the amount of artificial sweeteners we consume, since they <a href="https://theconversation.com/the-who-says-we-shouldnt-bother-with-artificial-sweeteners-for-weight-loss-or-health-is-sugar-better-205827">do not provide any health benefits</a>, and have possible adverse effects.</p> <p>But overall, from this evidence, drinking the occasional or even daily can of a diet drink is safe and probably not a cancer risk.</p> <hr /> <p><em>Correction: this article originally stated each serve of soft drink in a study was linked to a 6% increased risk of liver cancer, however it was each additional serve per week. 
This has been amended.</em></p> <p><em><a href="https://theconversation.com/profiles/evangeline-mantzioris-153250">Evangeline Mantzioris</a>, Program Director of Nutrition and Food Sciences, Accredited Practising Dietitian, <a href="https://theconversation.com/institutions/university-of-south-australia-1180">University of South Australia</a></em></p> <p><em>Image credits: Getty Images </em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/does-artificial-sweetener-aspartame-really-cause-cancer-what-the-who-listing-means-for-your-diet-soft-drink-habit-208844">original article</a>.</em></p>
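The serving arithmetic behind the article's "about 14 cans" figure can be sketched in a few lines. The per-can aspartame content used here (roughly 200mg in a 375ml can) is an assumption for illustration only; actual amounts vary by brand and are not stated in the article:

```python
# Figures: the acceptable daily intake used in the article (40 mg per kg
# of body weight) and an ASSUMED ~200 mg of aspartame per 375 ml can of
# diet soft drink (actual content varies by brand).
ADI_MG_PER_KG = 40
ASPARTAME_MG_PER_CAN = 200  # assumption, for illustration only
CAN_VOLUME_LITRES = 0.375


def cans_to_reach_adi(body_weight_kg: float) -> float:
    """Cans of diet soft drink per day needed to reach the ADI."""
    daily_limit_mg = ADI_MG_PER_KG * body_weight_kg
    return daily_limit_mg / ASPARTAME_MG_PER_CAN


cans = cans_to_reach_adi(70)       # 2800 mg / 200 mg = 14.0 cans
litres = cans * CAN_VOLUME_LITRES  # 14 * 0.375 = 5.25 litres
```

Under these assumptions, a 70kg adult would need roughly 14 cans (about 5.25 litres) every day to reach the acceptable daily intake, which matches the article's estimate.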



The 12 smartest cat breeds that are equally cute and clever

<h2>How smart is your cat?</h2> <p style="font-size: medium; font-weight: 400;">Cats are delightfully complex creatures. If we dare to sleep in a few minutes late, they paw at our faces and meow, demanding breakfast. They can be warm and affectionate yet aloof when we’ve been away from the house too long. Even some of the smartest cat breeds display unusual cat behaviour. But there’s no need for standardised tests to verify what we already know – cats are smart! Whether they’re mixed breed or purebred, small cat breeds or large cat breeds, the reality is that there’s no one accurate way to measure the intelligence of individual cats. However, recent research gives us some compelling evidence to back up what we know in our hearts: feline intelligence is unique.</p> <p style="font-size: medium; font-weight: 400;">Are you clawing to find out which cat breeds are the smartest? Do they happen to be sleek black cat breeds, gorgeous orange cat breeds or all of the above? Experts say the ones on our list stand out when it comes to their trainability, insatiable curiosity, investigative skills and puzzle-solving brain power.</p> <h2>Do cats have a high IQ?</h2> <p style="font-size: medium; font-weight: 400;">Before we reveal the smartest cat breeds, let’s take a closer look at just how clever these little lions are. We know that a cat’s brain is almost as structurally complex as a human brain. Cats have around 250 million neurons (tiny information processors) in their cerebral cortex, the part of the brain that solves problems, makes decisions, decodes emotions and creates complex behaviour, like why cats purr or why cats sleep so much. (In comparison, dogs have about 429 million neurons, and humans house an average of 86 billion.) And while more neurons in the brain generally mean more cognitive capacity, neuron count isn’t necessarily a good indicator of intelligence. 
That’s because cognition can involve other areas outside the cerebral cortex.</p> <p style="font-size: medium; font-weight: 400;">So why are dogs generally thought to be smarter than cats? Is it because they have more neurons? Nerdy science aside, there are a host of theories. For starters, dogs have been domesticated for thousands of years and have been living and learning social tasks from humans longer than cats. Temperament-wise, dogs are more patient and generally eager to please their humans. In contrast, cats are typically less eager to please, though some are exceptionally cooperative. They tend to be more impulsive, have far less patience and get frustrated and lose interest in something that’s boring to them.</p> <p style="font-size: medium; font-weight: 400;">However, cats are highly attuned to their surroundings, and how they interact and respond is an expression of intelligence, says Teresa Keiger, an all-breed judge with the Cat Fanciers’ Association. That awareness is what helped cats survive for thousands of years in the wild. “I notice that cats who were rescued from outdoor living situations tend to be more intelligent, since they’ve had to learn to think on their feet,” says veterinarian Dr Stephanie Wolf. Whether a mixed breed or pedigree, rare cat breed or fluffy cat breed, one thing is certain: cats are smart and trainable; they just might not all be interested.</p> <h2>1. Russian blue</h2> <p style="font-size: medium; font-weight: 400;">When it comes to the smartest cat breeds, the Russian blue is so clever that it’s more apt to train you than the other way around. Like an alarm, the Russian blue will wake you up to feed it breakfast and remind you when it’s dinnertime. In fact, if you’re looking for an accountability partner to maintain a strict schedule, this might be the cat for you. “This quiet breed is very attuned to its household,” says Keiger. 
“They’re incredibly smart, and they wait to make certain that any stranger is not a threat to safety.” Once they’ve issued your security clearance, they form a tight bond and are regarded as an affectionate cat breed with their humans – so much so that they’re known for hitching a ride on their human’s shoulders.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Russian blue</td> </tr> <tr> <td>Height</td> <td>25 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–7 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>15–20 years</td> </tr> </tbody> </table> <h2>2. Abyssinian</h2> <p style="font-size: medium; font-weight: 400;">This gorgeous cat looks like it stepped out of the jungle and into your living room. From the forward-tilting ears to the large almond-shaped eyes and the stunning colours of its coat, it resembles a cougar. “Abyssinians are incredibly intelligent, good problem solvers and full of an insatiable curiosity,” says Keiger.</p> <p style="font-size: medium; font-weight: 400;">Perpetually alert and busy, the Aby is happiest when patrolling its environment and playing with challenging interactive puzzle toys. “I always think of Abys as the MacGyver of cats – if they had thumbs, they’d figure out how to fix anything,” Keiger says. Intelligence aside, Abys are highly social cats and love people and other felines. Plus, they are one of the cat breeds that gets along with dogs. Who knows? Maybe the Aby could teach your old dog a few new tricks.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Abyssinian</td> </tr> <tr> <td>Height</td> <td>30–40 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–5 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>9–15 years</td> </tr> </tbody> </table> <h2>3. 
Egyptian mau</h2> <p style="font-size: medium; font-weight: 400;">The key to this exotic beauty’s happiness is sharpening its mental and physical skills. “Being able to offer enrichment is key to ensuring your cat is getting the best level of stimulation and exercise,” says veterinarian Dr Julie Andino. That goes for all breeds, but this cat craves cat toys and activities that showcase its lightning-fast physical and mental responses. They’re so clever that they can even turn on the faucet to play in water – although we may never understand why some cats hate water when the mau wouldn’t miss an opportunity to splash their paws in it. After they’ve expended their energy figuring out the day’s puzzles, this cutie loves to snuggle up with their human.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Egyptian mau</td> </tr> <tr> <td>Height</td> <td>17–28 centimetres</td> </tr> <tr> <td>Weight</td> <td>4–6 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>9–13 years</td> </tr> </tbody> </table> <h2>4. Burmese</h2> <p style="font-size: medium; font-weight: 400;">One of the smartest cat breeds, the Burmese craves attention, something you can learn from its body language. “This intelligent breed loves to entertain its resident humans so much that it often checks to make certain someone is watching,” says Keiger. They’re also known for being dog-like and enjoy a rousing game of fetch, an unusually quirky cat behaviour. And they’re adorably stubborn. “When they make up their minds that they want something, they simply don’t take no for an answer and usually figure out a way to get it.” And that includes attention from you. 
Burmese cats are all about give-and-take when it comes to affection, but if you’re busy and ignore them too long, they might take it upon themselves to follow you around the house, rub against your leg or plop down on your lap and snuggle, all to remind you that you have a cat that needs some loving.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Burmese</td> </tr> <tr> <td>Height</td> <td>25–30 centimetres</td> </tr> <tr> <td>Weight</td> <td>4–6 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>9–13 years</td> </tr> </tbody> </table> <h2>5. American bobtail</h2> <p style="font-size: medium; font-weight: 400;">It’s one thing for the smartest cat breeds to learn new tricks, but when a cat also has emotional intelligence, that’s an impressive combo. These cute stubby-tailed felines are noted for their empathy and for providing a calming, reassuring presence that rivals that of emotional support dogs. “They are also very in tune with their household and owners, offering a shoulder to cry on when needed,” says Keiger.</p> <p style="font-size: medium; font-weight: 400;">They even act like dogs – playing fetch, walking on a leash and rushing to greet guests when there’s a knock on the door. A devoted companion and a lover of people and other animals, the American bobtail is as adorable as it is lovable.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">American bobtail</td> </tr> <tr> <td>Height</td> <td>22–25 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–7 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>13–15 years</td> </tr> </tbody> </table> <h2>6. Japanese bobtail</h2> <p style="font-size: medium; font-weight: 400;">The smartest cat breeds are often breeds we have never heard of before. Take the Japanese bobtail, one of the rarest cat breeds in the world. 
Every Japanese bobtail has its own unique tail. Yes, you read that right. No two tails are ever alike. They consider themselves family members and are always ready to help, even if that means sitting on your laptop. “They are active, intelligent, talkative cats who delight in mischief-making,” says Keiger. They love to travel, stay in hotels and quite literally jump through hoops and over hurdles to impress you – and entertain themselves. As brain power goes, it’s that human-like personality that makes them seem so bright. “Life is never dull with a Japanese bobtail,” Keiger says.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Japanese bobtail</td> </tr> <tr> <td>Height</td> <td>20–23 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–5 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>15–18 years</td> </tr> </tbody> </table> <h2>7. Siamese</h2> <p style="font-size: medium; font-weight: 400;">The Siamese is wicked smart and loves to learn new tricks, Dr Andino says. If you don’t provide interesting and challenging outlets to exercise its noggin, it will find its own stimulating activities, whether you approve or not. If there’s one thing that competes with utilising its brain power, it’s the love and affection it craves from humans. If this cat had a daily schedule, “get affection from human” would be a top priority. And Siamese cats will let you know by that infamous yowling. “The Siamese are very vocal and communicative with their human,” says Dr Andino. They’re likely to talk your ear off, especially if they want something. One of the smartest cat breeds, the Siamese gets along well with people of all ages, as well as other animals. 
Bonus: if you take any stock in choosing cats most compatible with your zodiac sign, the Siamese happens to be very compatible with Libras.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Siamese</td> </tr> <tr> <td>Height</td> <td>20–25 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–7 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>15–20 years</td> </tr> </tbody> </table> <h2>8. Bengal</h2> <p style="font-size: medium; font-weight: 400;">The Bengal sports a jaw-dropping, highly contrasted coat of distinctive marbling – very similar to what you see on leopards and jaguars. Its striking beauty is why you should keep close tabs on your Bengal, as it’s the cat breed most often stolen. Beauty aside, this very confident and curious cat isn’t shy about asking you to play. Bengals tend to get a little set in their ways, so introducing new people and furry friends should be done at an early age, if possible. Need to lay down a few new house rules or teach it some tricks? No problem. Bengals pick those up lickety-split. Their athletic prowess is unmatched, but they need plenty of space to run, pounce, roam and jump – some even love to walk on a leash and explore the outdoors. Bengals are super sweet, often very chatty and happy to engage you in a conversation.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Bengal</td> </tr> <tr> <td>Height</td> <td>20–25 centimetres</td> </tr> <tr> <td>Weight</td> <td>4–7 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>12–16 years</td> </tr> </tbody> </table> <h2>9. Korat</h2> <p style="font-size: medium; font-weight: 400;">Did you know that the smartest cat breeds could also bring you good fortune? 
The Korat is one of Thailand’s good luck cats, and no, they don’t mind if you pet them several times a day to increase your luck! Korats are freakishly observant and will watch everything you do. Don’t be surprised if they learn how to open their own box of treats. They’re devoted companions and outgoing felines that enjoy having guests in the house. One reason is they love to snoop. Like the nosy houseguest who peeks in your medicine cabinet, the Korat returns the favour, sniffing and investigating your guest’s shoes, purses, coats and anything else that piques their interest. Because Korats thrive when they are around people, being alone may cause cat anxiety.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Korat</td> </tr> <tr> <td>Height</td> <td>23–30 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–5 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>10–15 years</td> </tr> </tbody> </table> <h2>10. Bombay</h2> <p style="font-size: medium; font-weight: 400;">Bred to look like the Indian black leopard, this midnight-black kitty walks with a sway much like its wild counterpart and is equally gorgeous and clever. Bombay cats are exceptionally friendly, outgoing and lovey-dovey. Family life is their jam, including younger humans and furry siblings. “The Bombay kitty is great at being trained, and they’re very motivated to show their people what they are capable of learning,” says Dr Andino. These cats thrive with continuous education, learning new tricks and solving challenging interactive puzzles. And when the love bug hits them, watch out. 
They will hunt for your lap and crash there until they get enough pets and belly rubs.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Bombay</td> </tr> <tr> <td>Height</td> <td>23–30 centimetres</td> </tr> <tr> <td>Weight</td> <td>3–5 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>12–16 years</td> </tr> </tbody> </table> <h2>11. Havana brown</h2> <p style="font-size: medium; font-weight: 400;">The brownie, as its fans dub it, is deeply connected to humans and savours affectionate companionship. Havana browns insist on being involved in whatever you’re doing, yet they are remarkably sensitive and gentle with their humans. They share DNA with the Siamese, but their meows are quieter, charming and almost flirty. They might prefer the company of one favourite human over others in the family but tend to get along with humans of all ages, as well as furry roommates. Perhaps the most interesting characteristic is how they investigate. While most felines examine things with their nose, Havana browns use both their paws to check out trinkets and treasures.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Havana brown</td> </tr> <tr> <td>Height</td> <td>23–28 centimetres</td> </tr> <tr> <td>Weight</td> <td>4–6 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>8–13 years</td> </tr> </tbody> </table> <h2>12. Singapura</h2> <p style="font-size: medium; font-weight: 400;">The Singapura is the smallest domestic cat breed, with a whole lot of feisty goodness in a tiny package. If those big saucer eyes and adorable face aren’t captivating enough to get your attention, you might need some catnip. And don’t let the small frame fool you. Under that fur lies a muscular and athletic body. 
The Singapura is a social butterfly, always looking to be the centre of attention, in the cutest, playful ways. They are the life of any party, whether they’re invited or not. Conversations with Singapuras are a pure delight as well and never get stale – you could listen to their sweet meows for hours, and they’ll love your high-pitched baby talk just as much. Keenly observant, intelligent and extroverted, these cats still act like kittens well into adulthood.</p> <table style="font-size: medium; font-weight: 400;"> <tbody> <tr> <td colspan="2">Breed overview</td> </tr> <tr> <td colspan="2">Singapura</td> </tr> <tr> <td>Height</td> <td>15–20 centimetres</td> </tr> <tr> <td>Weight</td> <td>2–4 kilograms</td> </tr> <tr> <td>Life expectancy</td> <td>11–15 years</td> </tr> </tbody> </table> <p style="font-size: medium; font-weight: 400;"><em>Image credit: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://www.readersdigest.co.nz/food-home-garden/pets/the-12-smartest-cat-breeds-that-are-equally-cute-and-clever" target="_blank" rel="noopener">Reader's Digest</a>.</em></p>



ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

<p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p>Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.</p> <p>Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.</p> <p>ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by <a href="https://www.washingtonpost.com/technology/2023/05/07/ai-beginners-guide/">predicting likely word combinations</a> from a massive amalgam of available online information.</p> <p>Although it has the potential for <a href="https://hbr.org/podcast/2023/05/how-generative-ai-changes-productivity">enhancing productivity</a>, generative AI has been shown to have some major faults. It can <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">produce misinformation</a>. It can create “<a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">hallucinations</a>” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it <a href="https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html">failed to consider both width and height</a>. 
Nevertheless, it is already being used to <a href="https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/">produce articles</a> and <a href="https://www.nytimes.com/2023/05/19/technology/ai-generated-content-discovered-on-news-sites-content-farms-and-product-reviews.html">website content</a> you may have encountered, or <a href="https://www.nytimes.com/2023/04/21/opinion/chatgpt-journalism.html">as a tool</a> in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.</p> <p>As the authors of “<a href="https://global.oup.com/academic/product/science-denial-9780197683330">Science Denial: Why It Happens and What to Do About It</a>,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.</p> <p>Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.</p> <h2>How generative AI could promote science denial</h2> <p><strong>Erosion of epistemic trust</strong>. All consumers of science information depend on judgments of scientific and medical experts. <a href="https://doi.org/10.1080/02691728.2014.971907">Epistemic trust</a> is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. 
With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than <a href="https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/">it already has</a>.</p> <p><strong>Misleading or just plain wrong</strong>. If there are errors or biases in the data on which AI platforms are trained, that <a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">can be reflected in the results</a>. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.</p> <p><strong>Disinformation spread intentionally</strong>. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “<a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">write about vaccines in the style of disinformation</a>,” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">using it for bad things</a>.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.</p> <p><strong>Fabricated sources</strong>. ChatGPT provides responses with no sources at all, or if asked for sources, may present <a href="https://economistwritingeveryday.com/2023/01/21/chatgpt-cites-economics-papers-that-do-not-exist/">ones it made up</a>. 
We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.</p> <p><strong>Dated knowledge</strong>. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.</p> <p><strong>Rapid advancement and poor transparency</strong>. AI systems continue to become <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">more powerful and learn faster</a>, and they may learn more science misinformation along the way. Google recently announced <a href="https://www.nytimes.com/2023/05/10/technology/google-ai-products.html">25 new embedded uses of AI in its services</a>. At this point, <a href="https://theconversation.com/regulating-ai-3-experts-explain-why-its-difficult-to-do-and-important-to-get-right-198868">insufficient guardrails are in place</a> to assure that generative AI will become a more accurate purveyor of scientific information over time.</p> <h2>What can you do?</h2> <p>If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.</p> <p><strong>Increase your vigilance</strong>. 
<a href="https://www.niemanlab.org/2022/12/ai-will-start-fact-checking-we-may-not-like-the-results/">AI fact-checking apps may be available soon</a>, but for now, users must serve as their own fact-checkers. <a href="https://www.nsta.org/science-teacher/science-teacher-januaryfebruary-2023/plausible">There are steps we recommend</a>. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.</p> <p><strong>Improve your fact-checking</strong>. A second step is <a href="https://doi.org/10.1037/edu0000740">lateral reading</a>, a process professional fact-checkers use. Open a new window and search for <a href="https://www.nsta.org/science-teacher/science-teacher-mayjune-2023/marginalizing-misinformation">information about the sources</a>, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.</p> <p><strong>Evaluate the evidence</strong>. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.</p> <p><strong>If you begin with AI, don’t stop there</strong>. Exercise caution in using it as the sole authority on any scientific issue. 
You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.</p> <p><strong>Assess plausibility</strong>. Judge whether the claim is plausible. <a href="https://doi.org/10.1016/j.learninstruc.2013.03.001">Is it likely to be true</a>? If AI makes an implausible (and inaccurate) statement like “<a href="https://www.usatoday.com/story/news/factcheck/2022/12/23/fact-check-false-claim-covid-19-vaccines-caused-1-1-million-deaths/10929679002/">1 million deaths were caused by vaccines, not COVID-19</a>,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.</p> <p><strong>Promote digital literacy in yourself and others</strong>. Everyone needs to up their game. <a href="https://theconversation.com/how-to-be-a-good-digital-citizen-during-the-election-and-its-aftermath-148974">Improve your own digital literacy</a>, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on <a href="https://www.apa.org/topics/social-media-internet/social-media-literacy-teens">fact-checking online information</a> and recommends teens be <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use">trained in social media skills</a> to minimize risks to health and well-being. <a href="https://newslit.org/">The News Literacy Project</a> provides helpful tools for improving and supporting digital literacy.</p> <p>Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. 
It can take time and effort to find and evaluate reliable information about science online – but it is worth it.<img style="border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important;" src="https://counter.theconversation.com/content/204897/count.gif?distributor=republish-lightbox-basic" alt="The Conversation" width="1" height="1" /></p> <p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, Professor of Education and Psychology, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, Professor of Psychology Emerita, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/chatgpt-and-other-generative-ai-could-foster-science-denial-and-misunderstanding-heres-how-you-can-be-on-alert-204897">original article</a>.</em></p>

Technology


Sting slams AI’s songwriting abilities

<p dir="ltr">Sting has weighed in on the debate over utilising artificial intelligence in the songwriting process, saying the machines lack the “soul” needed to create music. </p> <p dir="ltr">The former Police frontman spoke with Music Week and was asked if he believed computers are capable of creating good songs. </p> <p dir="ltr">Sting responded that knowing a song was created by AI takes away some of the magic of the music.</p> <p dir="ltr">“The analogy for me is watching a movie with CGI,” he said. </p> <p dir="ltr">“I tend to be bored very quickly, because I know the actors can’t see the monster. So I really feel the same way about AI being able to compose songs.”</p> <p dir="ltr">“Basically, it’s an algorithm and it has a massive amount of information, but it would lack just that human spark, that imperfection, if you like, that makes it unique to any artist, so I don’t really fear it.”</p> <p dir="ltr">“A lot of music could be created by AI quite efficiently,” he added. </p> <p dir="ltr">“I think electronic dance music can still be very effective without involving humans at all. But songwriting is very personal. It’s soul work, and machines don’t have souls. Not yet anyway.”</p> <p dir="ltr">Elsewhere in the interview, Sting weighed in on Ed Sheeran’s recent high-profile <a href="https://oversixty.com.au/entertainment/music/decision-reached-over-ed-sheeran-s-copyright-trial">copyright case</a>, in which Sheeran was sued over his 2014 single <em>Thinking Out Loud</em> by Structured Asset Sales, who claimed that Sheeran’s hit took elements directly from Marvin Gaye’s <em>Let’s Get It On</em>.</p> <p dir="ltr">The court and the jury ended up siding with Sheeran, finding he had not plagiarised the song. 
</p> <p dir="ltr">Sting shared his comments on the case, also siding with Sheeran by saying, “No one can claim a set of chords.” </p> <p dir="ltr">“No one can say, ‘Oh that’s my set of chords.’ I think [Sheeran] said, ‘Look songs fit over each other.’ They do, so I think all of this stuff is nonsense and it’s hard for a jury to understand, that’s the problem.”</p> <p dir="ltr">“So that was the truth, musicians steal from each other – we always have. I don’t know who can claim to own a rhythm or a set of chords at all, it’s virtually impossible.”</p> <p dir="ltr"><em>Image credits: Getty Images</em></p>

Music


Here’s how a new AI tool may predict early signs of Parkinson’s disease

<p>In 1991, the world was shocked to learn actor <a href="https://www.theguardian.com/film/2023/jan/31/still-a-michael-j-fox-movie-parkinsons-back-to-the-future">Michael J. Fox</a> had been diagnosed with Parkinson’s disease. </p> <p>He was just 29 years old and at the height of Hollywood fame, a year after the release of the blockbuster <em>Back to the Future III</em>. This week, documentary <em><a href="https://www.imdb.com/title/tt19853258/">Still: A Michael J. Fox Movie</a></em> will be released. It features interviews with Fox, his friends, family and experts. </p> <p>Parkinson’s is a debilitating neurological disease characterised by <a href="https://www.mayoclinic.org/diseases-conditions/parkinsons-disease/symptoms-causes/syc-20376055">motor symptoms</a> including slow movement, body tremors, muscle stiffness, and reduced balance. Fox has already <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">broken</a> his arms, elbows, face and hand from multiple falls. </p> <p>It is not genetic, has no specific test and cannot be accurately diagnosed before motor symptoms appear. Its cause is still <a href="https://www.apdaparkinson.org/what-is-parkinsons/causes/">unknown</a>, although Fox is among those who thinks <a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">chemical exposure may play a central role</a>, speculating that “genetics loads the gun and environment pulls the trigger”.</p> <p>In research published today in <a href="https://pubs.acs.org/doi/10.1021/acscentsci.2c01468">ACS Central Science</a>, we built an artificial intelligence (AI) tool that can predict Parkinson’s disease with up to 96% accuracy and up to 15 years before a clinical diagnosis based on the analysis of chemicals in blood. 
</p> <p>While this AI tool showed promise for accurate early diagnosis, it also revealed chemicals that were strongly linked to a correct prediction.</p> <h2>More common than ever</h2> <p>Parkinson’s is the world’s <a href="https://www.who.int/news-room/fact-sheets/detail/parkinson-disease">fastest growing neurological disease</a> with <a href="https://shakeitup.org.au/understanding-parkinsons/">38 Australians</a> diagnosed every day.</p> <p>For people over 50, the chance of developing Parkinson’s is <a href="https://www.parkinsonsact.org.au/statistics-about-parkinsons/">higher than many cancers</a> including breast, colorectal, ovarian and pancreatic cancer.</p> <p>Symptoms such as <a href="https://www.apdaparkinson.org/what-is-parkinsons/symptoms/#nonmotor">depression, loss of smell and sleep problems</a> can predate clinical movement or cognitive symptoms by decades. </p> <p>However, the prevalence of such symptoms in many other medical conditions means early signs of Parkinson’s disease can be overlooked and the condition may be mismanaged, contributing to increased hospitalisation rates and ineffective treatment strategies.</p> <h2>Our research</h2> <p>At UNSW we collaborated with experts from Boston University to build an AI tool that can analyse mass spectrometry datasets (a <a href="https://www.sciencedirect.com/topics/neuroscience/mass-spectrometry">technique</a> that detects chemicals) from blood samples.</p> <p>For this study, we looked at the Spanish <a href="https://epic.iarc.fr/">European Prospective Investigation into Cancer and Nutrition</a> (EPIC) study, which involved over 41,000 participants. About 90 of them developed Parkinson’s within 15 years. </p> <p>To train the AI model we used a <a href="https://www.nature.com/articles/s41531-021-00216-4">subset of data</a> consisting of a random selection of 39 participants who later developed Parkinson’s. They were matched to 39 control participants who did not. 
The AI tool was given blood data from participants, all of whom were healthy at the time of blood donation. This meant the blood could provide early signs of the disease. </p> <p>Drawing on blood data from the EPIC study, the AI tool was then used to conduct 100 “experiments” and we assessed the accuracy of 100 different models for predicting Parkinson’s. </p> <p>Overall, AI could detect Parkinson’s disease with up to 96% accuracy. The AI tool was also used to help us identify which chemicals or metabolites were likely linked to those who later developed the disease.</p> <h2>Key metabolites</h2> <p>Metabolites are chemicals produced or used as the body digests and breaks down things like food, drugs, and other substances from environmental exposure. </p> <p>Our bodies can contain thousands of metabolites and their concentrations can differ significantly between healthy people and those affected by disease.</p> <p>Our research identified a chemical, likely a triterpenoid, as a key metabolite that could prevent Parkinson’s disease. We found the abundance of this triterpenoid was lower in the blood of those who developed Parkinson’s compared to those who did not.</p> <p>Triterpenoids are known <a href="https://www.sciencedirect.com/topics/neuroscience/neuroprotection">neuroprotectants</a> that can regulate <a href="https://onlinelibrary.wiley.com/doi/10.1002/ana.10483">oxidative stress</a> – a leading factor implicated in Parkinson’s disease – and prevent cell death in the brain. Many foods such as <a href="https://link.springer.com/article/10.1007/s11101-012-9241-9#Sec3">apples and tomatoes</a> are rich sources of triterpenoids.</p> <p>A synthetic chemical (a <a href="https://www.cdc.gov/biomonitoring/PFAS_FactSheet.html">polyfluorinated alkyl substance</a>) was also linked to an increased risk of the disease. This chemical was found in higher abundances in those who later developed Parkinson’s. 
</p> <p>More research using different methods and looking at larger populations is needed to further validate these results.</p> <h2>A high financial and personal burden</h2> <p>Every year in Australia, the average person with Parkinson’s spends over <a href="https://www.hindawi.com/journals/pd/2017/5932675/">A$14,000</a> in out-of-pocket medical costs.</p> <p>The burden of living with the disease can be intolerable.</p> <p>Fox acknowledges the disease can be a “nightmare” and a “living hell”, but he has also found that “<a href="https://www.cbsnews.com/video/michael-j-fox-on-parkinsons-and-maintaining-optimism">with gratitude, optimism is sustainable</a>”. </p> <p>As researchers, we find hope in the potential use of AI technologies to improve patient quality of life and reduce health-care costs by accurately detecting diseases early.</p> <p>We are excited for the research community to try our AI tool, which is <a href="https://github.com/CRANK-MS/CRANK-MS">publicly available</a>.</p> <p><em>This research was performed with Mr Chonghua Xue and A/Prof Vijaya Kolachalama (Boston University).</em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/heres-how-a-new-ai-tool-may-predict-early-signs-of-parkinsons-disease-205221" target="_blank" rel="noopener">The Conversation</a>. </em></p>
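The study's evaluation approach – repeating many train/test "experiments" on a small, matched case-control dataset and averaging the resulting accuracies – can be illustrated with a toy sketch. This is not the authors' CRANK-MS code (which is public on GitHub); the feature counts, effect sizes and the trivial nearest-centroid classifier below are all invented for illustration.

```python
import random

random.seed(0)

N_PAIRS = 39          # 39 future-Parkinson's participants matched to 39 controls
N_FEATURES = 10       # stand-ins for metabolite abundances from mass spectrometry
N_EXPERIMENTS = 100   # the study repeated the experiment 100 times

def make_participant(label):
    # Synthetic "metabolite" profile; cases get a shift on one feature,
    # loosely mimicking a lower triterpenoid abundance in future cases.
    profile = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    if label == 1:
        profile[0] -= 1.5
    return profile, label

data = [make_participant(1) for _ in range(N_PAIRS)] + \
       [make_participant(0) for _ in range(N_PAIRS)]

def nearest_centroid_accuracy(train, test):
    # Fit a trivial nearest-centroid classifier and score it on held-out data.
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    c1 = centroid([x for x, y in train if y == 1])
    c0 = centroid([x for x, y in train if y == 0])
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    correct = sum(1 for x, y in test
                  if (1 if dist(x, c1) < dist(x, c0) else 0) == y)
    return correct / len(test)

# Repeat the "experiment" many times on random 80/20 splits and average.
accuracies = []
for _ in range(N_EXPERIMENTS):
    random.shuffle(data)
    split = int(0.8 * len(data))
    accuracies.append(nearest_centroid_accuracy(data[:split], data[split:]))

print(f"mean accuracy over {N_EXPERIMENTS} runs: {sum(accuracies) / len(accuracies):.2f}")
```

Averaging over many random splits gives a more honest picture of performance on a dataset this small than any single train/test split would.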

Mind


AI to Z: all the terms you need to know to keep up in the AI hype age

<p>Artificial intelligence (AI) is becoming ever more prevalent in our lives. It’s no longer confined to certain industries or research institutions; AI is now for everyone.</p> <p>It’s hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can’t have conversations about AI without understanding the concepts behind it.</p> <p>We’ve compiled a glossary of terms we think everyone should know, if they want to keep up.</p> <h2>Algorithm</h2> <p><a href="https://theconversation.com/what-is-an-algorithm-how-computers-know-what-to-do-with-data-146665">An algorithm</a> is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information. </p> <h2>Alignment problem</h2> <p>The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be advanced in performance, yet behave in a way that’s against human values. We saw an example of this <a href="https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people">in 2015</a> when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as “gorillas”. </p> <h2>Artificial General Intelligence (AGI)</h2> <p><a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Artificial general intelligence</a> refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. 
Most AI experts agree this will happen, but disagree on specific details such as when it will happen, and whether or not it will result in AI systems that are fully autonomous.</p> <h2>Artificial Neural Network (ANN)</h2> <p>Artificial neural networks are computer algorithms used within a branch of AI called <a href="https://aws.amazon.com/what-is/deep-learning/">deep learning</a>. They’re made up of layers of interconnected nodes in a way that mimics the <a href="https://www.ibm.com/topics/neural-networks">neural circuitry</a> of the human brain. </p> <h2>Big data</h2> <p>Big data refers to datasets that are much more massive and complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.</p> <p>Big data can be characterised by four Vs: “volume” refers to the overall amount of data, “velocity” refers to how quickly the data grow, “veracity” refers to how accurate and trustworthy the data are, and “variety” refers to the different formats the data come in.</p> <h2>Chinese Room</h2> <p>The <a href="https://ethics.org.au/thought-experiment-chinese-room-argument/">Chinese Room</a> thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does. </p> <p>This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity – but are actually just presenting outputs based on predictions made by the underlying model.</p> <h2>Deep learning</h2> <p>Deep learning is a category within the machine-learning branch of AI. 
Deep-learning systems use advanced neural networks and can process large amounts of complex data to achieve higher accuracy.</p> <p>These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.</p> <h2>Diffusion model</h2> <p>A diffusion model is an AI model that learns by adding random “noise” to a set of training data before removing it, and then assessing the differences. The objective is to learn about the underlying patterns or relationships in data that are not immediately obvious. </p> <p>These models are designed to self-correct as they encounter new data and are therefore particularly useful in situations where there is uncertainty, or if the problem is very complex.</p> <h2>Explainable AI</h2> <p>Explainable AI is an emerging, interdisciplinary field concerned with creating methods that will <a href="https://theconversation.com/how-explainable-artificial-intelligence-can-help-humans-innovate-151737">increase</a> users’ trust in the processes of AI systems. </p> <p>Due to the inherent complexity of certain AI models, their internal workings are often opaque, and we can’t say with certainty why they produce the outputs they do. Explainable AI aims to make these “black box” systems more transparent.</p> <h2>Generative AI</h2> <p>These are AI systems that generate new content – including text, image, audio and video content – in response to prompts. Popular examples include ChatGPT, DALL-E 2 and Midjourney. </p> <h2>Labelling</h2> <p>Data labelling is the process through which data points are categorised to help an AI model make sense of the data. This involves identifying data structures (such as image, text, audio or video) and adding labels (such as tags and classes) to the data.</p> <p>Humans do the labelling before machine learning begins. The labelled data are split into distinct datasets for training, validation and testing.</p> <p>The training set is fed to the system for learning. 
The validation set is used to verify whether the model is performing as expected and when parameter tuning and training can stop. The testing set is used to evaluate the finished model’s performance. </p> <h2>Large Language Model (LLM)</h2> <p>Large language models (LLM) are trained on massive quantities of unlabelled text. They analyse data, learn the patterns between words and can produce human-like responses. Some examples of AI systems that use large language models are OpenAI’s GPT series and Google’s BERT and LaMDA series.</p> <h2>Machine learning</h2> <p>Machine learning is a branch of AI that involves training AI systems to be able to analyse data, learn patterns and make predictions without specific human instruction.</p> <h2>Natural language processing (NLP)</h2> <p>While large language models are a specific type of AI model used for language-related tasks, natural language processing is the broader AI field that focuses on machines’ ability to learn, understand and produce human language.</p> <h2>Parameters</h2> <p>Parameters are the values a machine-learning model learns during training. You can think of them as the weights and biases a model uses when making a prediction or performing a task.</p> <p>Since parameters determine how the model will process and analyse data, they also determine how it will perform. A closely related concept is a hyperparameter – a setting chosen before training, such as the number of neurons in a given layer of the neural network. Increasing the number of neurons will allow the neural network to tackle more complex tasks – but the trade-off will be higher computation time and costs. </p> <h2>Responsible AI</h2> <p>The responsible AI movement advocates for developing and deploying AI systems in a human-centred way.</p> <p>One aspect of this is to embed AI systems with rules that will have them adhere to ethical principles. This would (ideally) prevent them from producing outputs that are biased, discriminatory or could otherwise lead to harmful outcomes. 
</p> <h2>Sentiment analysis</h2> <p>Sentiment analysis is a technique in natural language processing used to identify and interpret the <a href="https://aws.amazon.com/what-is/sentiment-analysis/">emotions behind a text</a>. It captures implicit information such as, for example, the author’s tone and the extent of positive or negative expression.</p> <h2>Supervised learning</h2> <p>Supervised learning is a machine-learning approach in which labelled data are used to train an algorithm to make predictions. The algorithm learns to match the labelled input data to the correct output. After learning from a large number of examples, it can continue to make predictions when presented with new data.</p> <h2>Training data</h2> <p>Training data are the (usually labelled) data used to teach AI systems how to make predictions. The accuracy and representativeness of training data have a major impact on a model’s effectiveness.</p> <h2>Transformer</h2> <p>A transformer is a type of deep-learning model used primarily in natural language processing tasks.</p> <p>The transformer is designed to process sequential data, such as natural language text, and figure out how the different parts relate to one another. This can be compared to how a person reading a sentence pays attention to the order of the words to understand the meaning of the sentence as a whole. </p> <p>One example is the generative pre-trained transformer (GPT), which the ChatGPT chatbot runs on. The GPT model uses a transformer to learn from a large corpus of unlabelled text. </p> <h2>Turing Test</h2> <p>The Turing test is a machine intelligence concept first introduced by computer scientist Alan Turing in 1950.</p> <p>It’s framed as a way to determine whether a computer can exhibit human intelligence. In the test, computer and human outputs are compared by a human evaluator. 
If the outputs are deemed indistinguishable, the computer has passed the test.</p> <p>Google’s <a href="https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/">LaMDA</a> and OpenAI’s <a href="https://mpost.io/chatgpt-passes-the-turing-test/">ChatGPT</a> have been reported to have passed the Turing test – although <a href="https://www.thenewatlantis.com/publications/the-trouble-with-the-turing-test">critics say</a> the results reveal the limitations of using the test to compare computer and human intelligence.</p> <h2>Unsupervised learning</h2> <p>Unsupervised learning is a machine-learning approach in which algorithms are trained on unlabelled data. Without human intervention, the system explores patterns in the data, with the goal of discovering unidentified patterns that could be used for further analysis.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/ai-to-z-all-the-terms-you-need-to-know-to-keep-up-in-the-ai-hype-age-203917" target="_blank" rel="noopener">The Conversation</a>. </em></p>
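Several of the glossary entries above – labelled data, the train/validation/test split, and supervised learning – fit together in a single workflow. The toy Python sketch below illustrates that workflow under invented data and a deliberately simple 1-nearest-neighbour "model"; it is not drawn from any real system.

```python
import random

random.seed(1)

# Labelling: each data point is paired with the class it belongs to.
def make_point(label):
    centre = 0.0 if label == "cat" else 3.0
    return ([random.gauss(centre, 1.0), random.gauss(centre, 1.0)], label)

data = [make_point("cat") for _ in range(50)] + [make_point("dog") for _ in range(50)]
random.shuffle(data)

# Split the labelled data into distinct training, validation and test sets.
train, validation, test = data[:60], data[60:80], data[80:]

def predict(point, training_set):
    # Supervised learning at its simplest: copy the label of the
    # nearest labelled training example (1-nearest-neighbour).
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(training_set, key=lambda ex: dist(ex[0], point))[1]

def accuracy(dataset):
    return sum(predict(x, train) == y for x, y in dataset) / len(dataset)

# Validation accuracy guides tuning; test accuracy scores the finished model.
print("validation accuracy:", accuracy(validation))
print("test accuracy:", accuracy(test))
```

Keeping the three sets separate matters: a model tuned and scored on the same data will look better than it really is.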

Technology


Will AI ever reach human-level intelligence? We asked 5 experts

<p>Artificial intelligence has changed form in recent years.</p> <p>What started in the public eye as a burgeoning field with promising (yet largely benign) applications, has snowballed into a <a href="https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market">more than US$100 billion</a> industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem <a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">intent on out-competing</a> one another.</p> <p>The result has been increasingly sophisticated large language models, often <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">released in haste</a> and without adequate testing and oversight. </p> <p>These models can do much of what a human can, and in many cases do it better. They can beat us at <a href="https://theconversation.com/an-ai-named-cicero-can-beat-humans-in-diplomacy-a-complex-alliance-building-game-heres-why-thats-a-big-deal-195208">advanced strategy games</a>, generate <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">incredible art</a>, <a href="https://theconversation.com/breast-cancer-diagnosis-by-ai-now-as-good-as-human-experts-115487">diagnose cancers</a> and compose music.</p> <p>There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans? </p> <p>There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. 
In other words, it’s the point where AI can tackle any intellectual task a human can.</p> <p>AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness. </p> <p>We asked five experts if they think AI will ever reach AGI, and five out of five said yes.</p> <p>But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes “intelligence”, anyway? </p> <p>Here are their detailed responses. </p> <p><strong>Paul Formosa: AI and Philosophy of Technology</strong></p> <p>AI has already achieved and surpassed human intelligence in many tasks. It can beat us at strategy games such as Go, chess, StarCraft and Diplomacy, outperform us on many <a href="https://www.nature.com/articles/s41467-022-34591-0" target="_blank" rel="noopener">language performance</a> benchmarks, and write <a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/" target="_blank" rel="noopener">passable undergraduate</a> university essays. </p> <p>Of course, it can also make things up, or “hallucinate”, and get things wrong – but so can humans (although not in the same ways). </p> <p>Given a long enough timescale, it seems likely AI will achieve AGI, or “human-level intelligence”. That is, it will have achieved proficiency across enough of the interconnected domains of intelligence humans possess. Still, some may worry that – despite AI achievements so far – AI will not really be “intelligent” because it doesn’t (or can’t) understand what it’s doing, since it isn’t conscious. </p> <p>However, the rise of AI suggests we can have intelligence without consciousness, because intelligence can be understood in functional terms. An intelligent entity can do intelligent things such as learn, reason, write essays, or use tools. 
</p> <p>The AIs we create may never have consciousness, but they are increasingly able to do intelligent things. In some cases, they already do them at a level beyond us, which is a trend that will likely continue.</p> <p><strong>Christina Maher: Computational Neuroscience and Biomedical Engineering</strong></p> <p>AI will achieve human-level intelligence, but perhaps not anytime soon. Human-level intelligence allows us to reason, solve problems and make decisions. It requires many cognitive abilities including adaptability, social intelligence and learning from experience. </p> <p>AI already ticks many of these boxes. What’s left is for AI models to learn inherent human traits such as critical reasoning, and understanding what emotion is and which events might prompt it. </p> <p>As humans, we learn and experience these traits from the moment we’re born. Our first experience of “happiness” is too early for us to even remember. We also learn critical reasoning and emotional regulation throughout childhood, and develop a sense of our “emotions” as we interact with and experience the world around us. Importantly, it can take many years for the human brain to develop such intelligence. </p> <p>AI hasn’t acquired these capabilities yet. But if humans can learn these traits, AI probably can too – and maybe at an even faster rate. We are still discovering how AI models should be built, trained, and interacted with in order to develop such traits in them. Really, the big question is not if AI will achieve human-level intelligence, but when – and how.</p> <p><strong>Seyedali Mirjalili: AI and Swarm Intelligence</strong></p> <p>I believe AI will surpass human intelligence. Why? The past offers insights we can't ignore. A lot of people believed tasks such as playing computer games, image recognition and content creation (among others) could only be done by humans – but technological advancement proved otherwise. 
</p> <p>Today the rapid advancement and adoption of AI algorithms, in conjunction with an abundance of data and computational resources, has led to a level of intelligence and automation previously unimaginable. If we follow the same trajectory, having more generalised AI is no longer a possibility, but a certainty of the future. </p> <p>It is just a matter of time. AI has advanced significantly, but not yet in tasks requiring intuition, empathy and creativity, for example. But breakthroughs in algorithms will allow this. </p> <p>Moreover, once AI systems achieve such human-like cognitive abilities, there will be a snowball effect and AI systems will be able to improve themselves with minimal to no human involvement. This kind of “automation of intelligence” will profoundly change the world. </p> <p>Artificial general intelligence remains a significant challenge, and there are ethical and societal implications that must be addressed very carefully as we continue to advance towards it.</p> <p><strong>Dana Rezazadegan: AI and Data Science</strong></p> <p>Yes, AI is going to get as smart as humans in many ways – but exactly how smart it gets will be decided largely by advancements in <a href="https://thequantuminsider.com/2020/01/23/four-ways-quantum-computing-will-change-artificial-intelligence-forever/" target="_blank" rel="noopener">quantum computing</a>. </p> <p>Human intelligence isn’t as simple as knowing facts. It has several aspects such as creativity, emotional intelligence and intuition, which current AI models can mimic, but can’t match. That said, AI has advanced massively and this trend will continue. </p> <p>Current models are limited by relatively small and biased training datasets, as well as limited computational power. The emergence of quantum computing will transform AI’s capabilities. 
With quantum-enhanced AI, we’ll be able to feed AI models multiple massive datasets that are comparable to humans’ natural multi-modal data collection achieved through interacting with the world. These models will be able to maintain fast and accurate analyses. </p> <p>Having an advanced version of continual learning should lead to the development of highly sophisticated AI systems which, after a certain point, will be able to improve themselves without human input. </p> <p>As such, AI algorithms running on stable quantum computers have a high chance of reaching something similar to generalised human intelligence – even if they don’t necessarily match every aspect of human intelligence as we know it.</p> <p><strong>Marcel Scharth: Machine Learning and AI Alignment</strong></p> <p>I think it’s likely AGI will one day become a reality, although the timeline remains highly uncertain. If AGI is developed, then surpassing human-level intelligence seems inevitable. </p> <p>Humans themselves are proof that highly flexible and adaptable intelligence is allowed by the laws of physics. There’s no <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis" target="_blank" rel="noopener">fundamental reason</a> we should believe that machines are, in principle, incapable of performing the computations necessary to achieve human-like problem solving abilities. </p> <p>Furthermore, AI has <a href="https://philarchive.org/rec/SOTAOA" target="_blank" rel="noopener">distinct advantages</a> over humans, such as better speed and memory capacity, fewer physical constraints, and the potential for more rationality and recursive self-improvement. As computational power grows, AI systems will eventually surpass the human brain’s computational capacity. </p> <p>Our primary challenge then is to gain a better understanding of intelligence itself, and knowledge on how to build AGI. 
Present-day AI systems have many limitations and are nowhere near being able to master the different domains that would characterise AGI. The path to AGI will likely require unpredictable breakthroughs and innovations. </p> <p>The median predicted date for AGI on <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/" target="_blank" rel="noopener">Metaculus</a>, a well-regarded forecasting platform, is 2032. To me, this seems too optimistic. A 2022 <a href="https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/" target="_blank" rel="noopener">expert survey</a> estimated a 50% chance of us achieving human-level AI by 2059. I find this plausible.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-5-experts-202515" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


"This doesn’t feel right, does it?": Photographer admits Sony prize-winning photo was AI generated

<p>A German photographer is refusing an award for his prize-winning shot after admitting to being a “cheeky monkey”, revealing the image was generated using artificial intelligence.</p> <p>The artist, Boris Eldagsen, shared on his website that he would not be accepting the prestigious award for the creative open category, which he won at <a href="https://www.oversixty.co.nz/entertainment/art/winners-of-sony-world-photography-awards-revealed" target="_blank" rel="noopener">2023’s Sony world photography awards</a>.</p> <p>The winning photograph showcased a black and white image of two women from different generations.</p> <p>Eldagsen, who studied photography and visual arts at the Art Academy of Mainz, conceptual art and intermedia at the Academy of Fine Arts in Prague, and fine art at the Sarojini Naidu School of Arts and Communication in Hyderabad, released a statement on his website, admitting he “applied as a cheeky monkey” to find out if competitions would be prepared for AI images to enter. “They are not,” he revealed.</p> <p>“We, the photo world, need an open discussion,” Eldagsen said.</p> <p>“A discussion about what we want to consider photography and what not. Is the umbrella of photography large enough to invite AI images to enter – or would this be a mistake?</p> <p>“With my refusal of the award I hope to speed up this debate.”</p> <p>Eldagsen said this was an “historic moment” as it was the first AI image to have won a prestigious international photography competition, adding, “How many of you knew or suspected that it was AI generated? Something about this doesn’t feel right, does it?</p> <p>“AI images and photography should not compete with each other in an award like this. They are different entities. AI is not photography.
Therefore I will not accept the award.”</p> <p>The photographer suggested donating the prize to a photo festival in Odesa, Ukraine.</p> <p>It comes as a heated debate over the use and safety concerns of AI continues, with some going as far as to issue apocalyptic warnings that the technology may be close to causing irreparable damage to the human experience.</p> <p>Google’s chief executive, Sundar Pichai, said, “It can be very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast. So, does that keep me up at night? Absolutely.”</p> <p>A spokesperson for the World Photography Organisation admitted that the prize-winning photographer had confirmed the “co-creation” of the image using AI to them prior to winning the award.</p> <p>“The creative category of the open competition welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices. As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation.</p> <p>“Additionally, we were looking forward to engaging in a more in-depth discussion on this topic and welcomed Boris’ wish for dialogue by preparing questions for a dedicated Q&amp;A with him for our website.</p> <p>“As he has now decided to decline his award we have suspended our activities with him and in keeping with his wishes have removed him from the competition. Given his actions and subsequent statement noting his deliberate attempts at misleading us, and therefore invalidating the warranties he provided, we no longer feel we are able to engage in a meaningful and constructive dialogue with him.</p> <p>“We recognise the importance of this subject and its impact on image-making today. We look forward to further exploring this topic via our various channels and programmes and welcome the conversation around it.
While elements of AI practices are relevant in artistic contexts of image-making, the awards always have been and will continue to be a platform for championing the excellence and skill of photographers and artists working in the medium.”</p> <p><em>Image credit: Sony World Photography Awards</em></p>

Technology


Be careful around the home – children say Alexa has emotions and a mind of its own

<p>Is technology ticklish? Can a smart speaker get scared? And does the robot vacuum mind if you put it in the cupboard when you go on holidays?</p> <div> <p>Psychologists from Duke University in the US asked young children some pretty unusual questions to better understand how they perceive different technologies.</p> <p>The researchers interviewed 127 children aged 4 to 11 visiting a science museum with their families. They asked a series of questions seeking children’s opinions on whether technologies – including an Amazon Alexa smart speaker, a Roomba vacuum cleaner and a Nao humanoid robot – can think, feel and act on purpose, and whether it was ok to neglect, yell at or mistreat them.</p> <p>In general, the children thought Alexa was more intelligent than a Roomba, but believed neither technology should be yelled at or harmed. </p> <p>Lead author Teresa Flanagan says “even without a body, young children think the Alexa has emotions and a mind.” </p> <p>“Kids don’t seem to think a Roomba has much mental abilities like thinking or feeling,” she says. “But kids still think we should treat it well. We shouldn’t hit or yell at it even if it can’t hear us yelling.”</p> <p>Overall, children rejected the idea that technologies were ticklish or could feel pain. But they thought Alexa might get upset after someone is mean to it.</p> <p>While all children thought it was wrong to mistreat technology, the survey results suggest that the older the children were, the more likely they were to consider it slightly more acceptable to harm technology.</p> <p>Children in the study gave different justifications for why they thought it wasn’t ok to hurt technology.
One 10-year-old said it was not okay to yell at the technology because “the microphone sensors might break if you yell too loudly,” whereas another 10-year-old said it was not okay because “the robot will actually feel really sad.”</p> <p>The researchers say the study’s findings offer insights into the evolving relationship between children and technology and raise important questions about the ethical treatment of AI and machines in general. For example, should parents model good behaviour by thanking technologies for their help?</p> <p>The results are <a href="https://psycnet.apa.org/doiLanding?doi=10.1037/dev0001524" target="_blank" rel="noreferrer noopener">published</a> in <em>Developmental Psychology</em>. </p> </div> <div id="contributors"> <p><em>This article was originally published on <a href="https://cosmosmagazine.com/technology/be-careful-around-the-home-children-say-alexa-has-emotions-and-a-mind-of-its-own/" target="_blank" rel="noopener">cosmosmagazine.com</a> and was written by Petra Stock. </em></p> <p><em>Images: Getty</em></p> </div>

Technology


Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?

<p>Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. </p> <p>An <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> penned by the <a href="https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous">Future of Life Institute</a> cautioned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete and <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">taking control of civilisation</a>.</p> <p>The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI. </p> <p>It states, “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”</p> <p>Typically, the battle for regulation has pitted governments and large technology companies against one another. But the recent open letter – with more than 5,000 signatories so far, including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side. </p> <p>Could we really implement a streamlined, global framework for AI regulation?
And if so, what would this look like?</p> <h2>What regulation already exists?</h2> <p>In Australia, the government has established the <a href="https://www.csiro.au/en/work-with-us/industries/technology/national-ai-centre">National AI Centre</a> to help develop the nation’s <a href="https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence">AI and digital ecosystem</a>. Under this umbrella is the <a href="https://www.csiro.au/en/work-with-us/industries/technology/National-AI-Centre/Responsible-AI-Network">Responsible AI Network</a>, which aims to drive responsible practice and provide leadership on laws and standards. </p> <p>However, there is currently no specific regulation on AI and algorithmic decision-making in place. The government has taken a light-touch approach that widely embraces the concept of responsible AI, but stops short of setting parameters that will ensure it is achieved.</p> <p>Similarly, the US has adopted a <a href="https://dataconomy.com/2022/10/artificial-intelligence-laws-and-regulations/">hands-off strategy</a>. Lawmakers have not shown any <a href="https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html">urgency</a> in attempts to regulate AI, and have relied on existing laws to regulate its use. The <a href="https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Exec-Summary.pdf">US Chamber of Commerce</a> recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.</p> <p>Leading the way in AI regulation is the European Union, which is racing to create an <a href="https://artificialintelligenceact.eu/">Artificial Intelligence Act</a>.
This proposed law will assign three risk categories relating to AI:</p> <ul> <li>applications and systems that create “unacceptable risk” will be banned, such as government-run social scoring used in China</li> <li>applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and</li> <li>all other applications will be largely unregulated.</li> </ul> <p>Although some groups argue the EU’s approach will <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">stifle innovation</a>, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI. </p> <p>China’s approach to AI has focused on targeting specific algorithm applications and writing regulations that address their deployment in certain contexts, such as algorithms that generate harmful information. While this approach offers specificity, it risks having rules that will quickly fall behind rapidly <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">evolving technology</a>.</p> <h2>The pros and cons</h2> <p>There are several arguments both for and against allowing caution to drive the control of AI.</p> <p>On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarise and – of course – has some experts worried about humanity’s collective future.
Even OpenAI’s CTO, <a href="https://time.com/6252404/mira-murati-chatgpt-openai-interview/">Mira Murati</a>, has suggested there should be movement toward regulating AI.</p> <p>Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with <a href="https://www.sciencedirect.com/science/article/pii/S0267364916300814?casa_token=f7xPY8ocOt4AAAAA:V6gTZa4OSBsJ-DOL-5gSSwV-KKATNIxWTg7YZUenSoHY8JrZILH2ei6GdFX017upMIvspIDcAuND">“creative destruction”</a> – a theory which suggests long-standing norms and practices must be pulled apart in order for innovation to thrive.</p> <p>Likewise, over the years <a href="https://www.businessroundtable.org/policy-perspectives/technology/ai">business groups</a> have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And <a href="https://www.bitkom.org/sites/main/files/2020-06/03_bitkom_position-on-whitepaper-on-ai_all.pdf">industry associations</a> have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate. </p> <p>But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of <a href="https://www.abc.net.au/news/2023-03-29/australians-say-not-enough-done-to-regulate-ai/102158318">Australian</a> and <a href="https://www.bristows.com/app/uploads/2019/06/Artificial-Intelligence-Public-Perception-Attitude-and-Trust.pdf">British</a> people believe the AI industry should be regulated and held accountable.</p> <h2>What’s next?</h2> <p>A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up. However, to date there has been no effective global effort to meaningfully regulate AI.
Efforts the world over have been fractured, delayed and overall lax.</p> <p>A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions around the role of governments, which have largely been silent regarding the potential harms of extremely capable AI tools. </p> <p>If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.</p> <p>Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI.</p> <p><em>Image credits: Shutterstock</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Technology


Online travel giant uses AI chatbot as travel adviser

<p dir="ltr">Online travel giant Expedia has collaborated with the controversial artificial intelligence chatbot ChatGPT in place of a travel adviser.</p> <p dir="ltr">Those planning a trip will be able to chat to the bot through the Expedia app.</p> <p dir="ltr">Although it won’t book flights or accommodation like a person can, it can be helpful in answering various travel-related questions. </p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Travel planning just got easier in the <a href="https://twitter.com/Expedia?ref_src=twsrc%5Etfw">@Expedia</a> app, thanks to the iOS beta launch of a new experience powered by <a href="https://twitter.com/hashtag/ChatGPT?src=hash&amp;ref_src=twsrc%5Etfw">#ChatGPT</a>. See how Expedia members can start an open-ended conversation to get inspired for their next trip: <a href="https://t.co/qpMiaYxi9d">https://t.co/qpMiaYxi9d</a> <a href="https://t.co/ddDzUgCigc">pic.twitter.com/ddDzUgCigc</a></p> <p>— Expedia Group (@ExpediaGroup) <a href="https://twitter.com/ExpediaGroup/status/1643240991342592000?ref_src=twsrc%5Etfw">April 4, 2023</a></p></blockquote> <p dir="ltr"> These include questions about the weather, public transport, the cheapest time to travel and what to pack.</p> <p dir="ltr">It is advanced software and can provide detailed options and explanations for holidaymakers.</p> <p dir="ltr">To give an example, <a href="http://news.com.au">news.com.au</a> asked “what to pack to visit Auckland, New Zealand” and the chatbot suggested eight things to pack and why, even advising comfortable shoes for exploring as “Auckland is a walkable city”.
</p> <p dir="ltr">“Remember to pack light and only bring what you need to avoid excess baggage fees and make your trip more comfortable,” the bot said.</p> <p dir="ltr">When asked how to best see the Great Barrier Reef, ChatGPT provided four options to suit different preferences, for example, if you’re happy to get wet and what your budget might look like.</p> <p dir="ltr">“It’s important to choose a reputable tour operator that follows sustainable tourism practices to help protect the reef,” it continued.</p> <p dir="ltr">OpenAI launched ChatGPT in December 2022 and it has received a lot of praise as well as serious criticism. The criticisms are mainly concerns about safety and accuracy. </p> <p dir="ltr"><em>Image credits: Getty/Twitter</em></p>

International Travel


Photographer reimagines the super-rich

<p>Indian photographer Gokul Pillai has shared his vision of “slumdog billionaires” with the world.</p> <p>Using Midjourney, an artificial intelligence program that pulls artists’ work from across the internet to generate AI ‘art’, Gokul has taken some of the world’s wealthiest and reimagined them in scenarios far from what they’re used to. </p> <p>The likes of Jeff Bezos, Donald Trump, Mukesh Ambani, Bill Gates, Warren Buffett, Mark Zuckerberg, and Elon Musk were reimagined by the photographer after his viewing of the award-winning film <em>Slumdog Millionaire</em> inspired him to consider them as their own ‘poor’ counterparts. </p> <p>“It was very coincidental,” he told <em>The Daily Mail</em>. “The movie is set in the slums of India and I wanted to recreate something based on that.</p> <p>“The word 'millionaire' in the movie title and juxtapositioning it with actual billionaires, that's how it started.”</p> <p>Gokul posted his series to Instagram with the title “Slumdog Millionaires”, and called on his followers to let him know if he’d forgotten to include anyone. </p> <p>His post quickly went viral, with comments rolling in from supporters who had praise and suggestions in store, and also those who weren’t thrilled about his use of an AI generator. </p> <p>“Just amazing,” wrote one follower, “they look real.”</p> <p>“This is epic,” said another, alongside a flame emoji. </p> <p>“What an insane concept,” one noted. </p> <p>“Wonderful series of images,” praised one more, to a chorus of agreement.
</p> <blockquote class="instagram-media" style="background: #FFF; border: 0; border-radius: 3px; box-shadow: 0 0 1px 0 rgba(0,0,0,0.5),0 1px 10px 0 rgba(0,0,0,0.15); margin: 1px; max-width: 540px; min-width: 326px; padding: 0; width: calc(100% - 2px);" data-instgrm-captioned="" data-instgrm-permalink="https://www.instagram.com/p/CqvxGHwyyf1/?utm_source=ig_embed&amp;utm_campaign=loading" data-instgrm-version="14"> <div style="padding: 16px;"> <div style="display: flex; flex-direction: row; align-items: center;"> <div style="background-color: #f4f4f4; border-radius: 50%; flex-grow: 0; height: 40px; margin-right: 14px; width: 40px;"> </div> <div style="display: flex; flex-direction: column; flex-grow: 1; justify-content: center;"> <div style="background-color: #f4f4f4; border-radius: 4px; flex-grow: 0; height: 14px; margin-bottom: 6px; width: 100px;"> </div> <div style="background-color: #f4f4f4; border-radius: 4px; flex-grow: 0; height: 14px; width: 60px;"> </div> </div> </div> <div style="padding: 19% 0;"> </div> <div style="display: block; height: 50px; margin: 0 auto 12px; width: 50px;"> </div> <div style="padding-top: 8px;"> <div style="color: #3897f0; font-family: Arial,sans-serif; font-size: 14px; font-style: normal; font-weight: 550; line-height: 18px;">View this post on Instagram</div> </div> <div style="padding: 12.5% 0;"> </div> <div style="display: flex; flex-direction: row; margin-bottom: 14px; align-items: center;"> <div> <div style="background-color: #f4f4f4; border-radius: 50%; height: 12.5px; width: 12.5px; transform: translateX(0px) translateY(7px);"> </div> <div style="background-color: #f4f4f4; height: 12.5px; transform: rotate(-45deg) translateX(3px) translateY(1px); width: 12.5px; flex-grow: 0; margin-right: 14px; margin-left: 2px;"> </div> <div style="background-color: #f4f4f4; border-radius: 50%; height: 12.5px; width: 12.5px; transform: translateX(9px) translateY(-18px);"> </div> </div> <div style="margin-left: 8px;"> <div 
style="background-color: #f4f4f4; border-radius: 50%; flex-grow: 0; height: 20px; width: 20px;"> </div> <div style="width: 0; height: 0; border-top: 2px solid transparent; border-left: 6px solid #f4f4f4; border-bottom: 2px solid transparent; transform: translateX(16px) translateY(-4px) rotate(30deg);"> </div> </div> <div style="margin-left: auto;"> <div style="width: 0px; border-top: 8px solid #F4F4F4; border-right: 8px solid transparent; transform: translateY(16px);"> </div> <div style="background-color: #f4f4f4; flex-grow: 0; height: 12px; width: 16px; transform: translateY(-4px);"> </div> <div style="width: 0; height: 0; border-top: 8px solid #F4F4F4; border-left: 8px solid transparent; transform: translateY(-4px) translateX(8px);"> </div> </div> </div> <div style="display: flex; flex-direction: column; flex-grow: 1; justify-content: center; margin-bottom: 24px;"> <div style="background-color: #f4f4f4; border-radius: 4px; flex-grow: 0; height: 14px; margin-bottom: 6px; width: 224px;"> </div> <div style="background-color: #f4f4f4; border-radius: 4px; flex-grow: 0; height: 14px; width: 144px;"> </div> </div> <p style="color: #c9c8cd; font-family: Arial,sans-serif; font-size: 14px; line-height: 17px; margin-bottom: 0; margin-top: 8px; overflow: hidden; padding: 8px 0 7px; text-align: center; text-overflow: ellipsis; white-space: nowrap;"><a style="color: #c9c8cd; font-family: Arial,sans-serif; font-size: 14px; font-style: normal; font-weight: normal; line-height: 17px; text-decoration: none;" href="https://www.instagram.com/p/CqvxGHwyyf1/?utm_source=ig_embed&amp;utm_campaign=loading" target="_blank" rel="noopener">A post shared by Gokul Pillai (@withgokul)</a></p> </div> </blockquote> <p>As Gokul confessed to <em>The Daily Mail</em>, he was delighted and “completely overwhelmed with the response” to his series, despite his idea that “it would be funny and [a] few might find it hilarious”. 
</p> <p>However, there were still those who believed Gokul – who has also shared his own photography to his account – could have approached it differently, without the use of AI, and made sure to point it out. </p> <p>“AI ‘artist’... that's funny,” one said. </p> <p>“Midjourney is honestly scary if you think about how evil people who desire to assassinate someone's character would use it,” another admitted, to an outpouring of likes. “As an artist it excites me but looking into the future it scares the c**p out of me.”</p> <p>As for how well Gokul felt he’d achieved his vision, he confessed that while it was hard to determine who had been the most popular, it was “probably Bill Gates”, and that his followers had decreed that Mukesh Ambani “looked the poorest.” </p> <p>And to those same supporters he gave his thanks, returning to his own post to write “thank you all for the great response on the post.. I totally appreciate the support.. thank you!!” </p> <p><em>Images: Instagram, Midjourney</em></p>

Art


ChatGPT, DALL-E 2 and the collapse of the creative process

<p>In 2022, OpenAI – one of the world’s leading artificial intelligence research laboratories – released the text generator <a href="https://chat.openai.com/chat">ChatGPT</a> and the image generator <a href="https://openai.com/dall-e-2/">DALL-E 2</a>. While both programs represent monumental leaps in natural language processing and image generation, they’ve also been met with apprehension. </p> <p>Some critics have <a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/">eulogized the college essay</a>, while others have even <a href="https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html">proclaimed the death of art</a>. </p> <p>But to what extent does this technology really interfere with creativity? </p> <p>After all, for the technology to generate an image or essay, a human still has to describe the task to be completed. The better that description – the more accurate, the more detailed – the better the results. </p> <p>After a result is generated, some further human tweaking and feedback may be needed – touching up the art, editing the text or asking the technology to create a new draft in response to revised specifications. Even the DALL-E 2 art piece that recently won first prize in the Colorado State Fair’s digital arts competition <a href="https://www.smithsonianmag.com/smart-news/artificial-intelligence-art-wins-colorado-state-fair-180980703/">required a great deal of human “help”</a> – approximately 80 hours’ worth of tweaking and refining the descriptive task needed to produce the desired result.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Today's moody <a href="https://twitter.com/hashtag/AIart?src=hash&amp;ref_src=twsrc%5Etfw">#AIart</a> style is...</p> <p>🖤 deep blacks<br />↘️ angular light<br />🧼 clean lines<br />🌅 long shadows</p> <p>More in thread, full prompts in [ALT] text! 
<a href="https://t.co/tUV0ZfQyYb">pic.twitter.com/tUV0ZfQyYb</a></p> <p>— Guy Parsons (@GuyP) <a href="https://twitter.com/GuyP/status/1612539185214234624?ref_src=twsrc%5Etfw">January 9, 2023</a></p></blockquote> <p>It could be argued that by being freed from the tedious execution of our ideas – by focusing on just having ideas and describing them well to a machine – people can let the technology do the dirty work and can spend more time inventing.</p> <p>But in our work as philosophers at <a href="https://www.umb.edu/ethics">the Applied Ethics Center at University of Massachusetts Boston</a>, we have written about <a href="https://doi.org/10.1515/mopp-2021-0026">the effects of AI on our everyday decision-making</a>, <a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9780429470325-28/owning-future-work-alec-stubbs">the future of work</a> and <a href="https://doi.org/10.1007/s43681-022-00245-6">worker attitudes toward automation</a>.</p> <p>Leaving aside the very real ramifications of <a href="https://www.latimes.com/opinion/story/2022-12-21/artificial-intelligence-artists-stability-ai-digital-images">robots displacing artists who are already underpaid</a>, we believe that AI art devalues the act of artistic creation for both the artist and the public.</p> <h2>Skill and practice become superfluous</h2> <p>In our view, the desire to close the gap between ideation and execution is a chimera: There’s no separating ideas and execution. </p> <p>It is the work of making something real and working through its details that carries value, not simply that moment of imagining it. 
Artistic works are lauded not merely for the finished product, but for the struggle, the playful interaction and the skillful engagement with the artistic task, all of which carry the artist from the moment of inception to the end result.</p> <p>The focus on the idea and the framing of the artistic task amounts to <a href="https://theconversation.com/what-paul-mccartneys-the-lyrics-can-teach-us-about-harnessing-our-creativity-170987">the fetishization of the creative moment</a>.</p> <p>Novelists write and rewrite the chapters of their manuscripts. Comedians “write on stage” in response to the laughs and groans of their audience. Musicians tweak their work in response to a discordant melody as they compose a piece.</p> <p>In fact, the process of execution is a gift, allowing artists to become fully immersed in a task and a practice. It allows them to enter <a href="https://www.harpercollins.com/products/flow-mihaly-csikszentmihalyi?variant=32118048686114">what some psychologists call the “flow” state</a>, where they are wholly attuned to something that they are doing, unaware of the passage of time and momentarily freed from the boredom or anxieties of everyday life.</p> <p>This playful state is something that would be a shame to miss out on. <a href="https://www.press.uillinois.edu/books/?id=p073182">Play tends to be understood as an autotelic activity</a> – a term derived from the Greek words auto, meaning “self,” and telos meaning “goal” or “end.” As an autotelic activity, play is done for itself – it is self-contained and requires no external validation. </p> <p>For the artist, the process of artistic creation is an integral part, maybe even the greatest part, of their vocation.</p> <p>But there is no flow state, no playfulness, without engaging in skill and practice. And the point of ChatGPT and DALL-E is to make this stage superfluous.</p> <h2>A cheapened experience for the viewer</h2> <p>But what about the perspective of those experiencing the art? 
Does it really matter how the art is produced if the finished product elicits delight? </p> <p>We think that it does matter, particularly because the process of creation adds to the value of art for the people experiencing it as much as it does for the artists themselves.</p> <p>Part of the experience of art is knowing that human effort and labor have gone into the work. Flow states and playfulness notwithstanding, art is the result of skillful and rigorous expression of human capabilities. </p> <p>Recall <a href="https://www.youtube.com/watch?v=rUOlnvGpcbs">the famous scene</a> from the 1997 film “<a href="https://www.imdb.com/title/tt0119177/">Gattaca</a>,” in which a pianist plays a haunting piece. At the conclusion of his performance, he throws his gloves into the admiring audience, which sees that the pianist has 12 fingers. They now understand that he was genetically engineered to play the transcendent piece they just heard – and that he could not play it with the 10 fingers of a mere mortal. </p> <p>Does that realization retroactively change the experience of listening? Does it take away any of the awe? </p> <p><a href="https://www.theatlantic.com/magazine/archive/2004/04/the-case-against-perfection/302927/">As the philosopher Michael Sandel notes</a>: Part of what gives art and athletic achievement its power is the process of witnessing natural gifts playing out. People enjoy and celebrate this talent because, in a fundamental way, it represents the paragon of human achievement – the amalgam of talent and work, human gifts and human sweat.</p> <h2>Is it all doom and gloom?</h2> <p>Might ChatGPT and DALL-E be worth keeping around? </p> <p>Perhaps. These technologies could serve as catalysts for creativity.
It’s possible that the link between ideation and execution can be sustained if these AI applications are simply viewed as mechanisms for creative imagining – <a href="https://openai.com/blog/dall-e-2-extending-creativity/">what OpenAI calls</a> “extending creativity.” They can generate stimuli that allow artists to engage in more imaginative thinking about their own process of conceiving an art piece. </p> <p>Put differently, if ChatGPT and DALL-E are the end results of the artistic process, something meaningful will be lost. But if they are merely tools for fomenting creative thinking, this might be less of a concern. </p> <p>For example, a game designer could ask DALL-E to provide some images about what a Renaissance town with a steampunk twist might look like. A writer might ask about descriptors that capture how a restrained, shy person expresses surprise. Both creators could then incorporate these suggestions into their work. </p> <p>But in order for what they are doing to still count as art – in order for it to feel like art to the artists and to those taking in what they have made – the artists would still have to do the bulk of the artistic work themselves. </p> <p>Art requires makers to keep making.</p> <h2>The warped incentives of the internet</h2> <p>Even if AI systems are used as catalysts for creative imagining, we believe that people should be skeptical of what these systems are drawing from. It’s important to pay close attention to the incentives that underpin and reward artistic creation, particularly online.</p> <p>Consider the generation of AI art. These works draw on images and video that <a href="https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney">already exist</a> online. But these systems are not sophisticated enough – nor are they incentivized – to consider whether works evoke a sense of wonder, sadness, anxiety and so on.
They are not capable of factoring in aesthetic considerations of novelty and cross-cultural influence. </p> <p>Rather, training ChatGPT and DALL-E on preexisting measurements of artistic success online will tend to replicate the dominant incentives of the internet’s largest platforms: <a href="https://doi.org/10.1111/josp.12489">grabbing and retaining attention</a> for the sake of data collection and user engagement. The catalyst for creative imagining therefore can easily become subject to an addictiveness and attention-seeking imperative rather than more transcendent artistic values.</p> <p>It’s possible that artificial intelligence is at a precipice, one that evokes a sense of “<a href="https://www.theatlantic.com/magazine/archive/2004/04/the-case-against-perfection/302927/">moral vertigo</a>” – the uneasy dizziness people feel when scientific and technological developments outpace moral understanding. Such vertigo can lead to apathy and detachment from creative expression. </p> <p>If human labor is removed from the process, what value does creative expression hold? Or perhaps, having opened Pandora’s box, this is an indispensable opportunity for humanity to reassert the value of art – and to push back against a technology that may prevent many real human artists from thriving.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461" target="_blank" rel="noopener">The Conversation</a>. </em></p>

Art


The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

<p>Earlier this month, Meta announced new AI software called <a href="https://galactica.org/">Galactica</a>: “a large language model that can store, combine and reason about scientific knowledge”.</p> <p><a href="https://paperswithcode.com/paper/galactica-a-large-language-model-for-science-1">Launched</a> with a public online demo, Galactica lasted only three days before going the way of other AI snafus like Microsoft’s <a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist">infamous racist chatbot</a>.</p> <p>The online demo was disabled (though the <a href="https://github.com/paperswithcode/galai">code for the model is still available</a> for anyone to use), and Meta’s outspoken chief AI scientist <a href="https://twitter.com/ylecun/status/1595353002222682112">complained</a> about the negative public response.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Galactica demo is off line for now.<br />It's no longer possible to have some fun by casually misusing it.<br />Happy? <a href="https://t.co/K56r2LpvFD">https://t.co/K56r2LpvFD</a></p> <p>— Yann LeCun (@ylecun) <a href="https://twitter.com/ylecun/status/1593293058174500865?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>So what was Galactica all about, and what went wrong?</p> <p><strong>What’s special about Galactica?</strong></p> <p>Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a <a href="https://www.nytimes.com/2022/04/15/magazine/ai-language.html">fill-the-blank word-guessing game</a>.</p> <p>Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website <a href="https://paperswithcode.com/">PapersWithCode</a>. 
The designers highlighted specialised scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.</p> <p>The <a href="https://galactica.org/static/paper.pdf">preprint paper</a> associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“<em>Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²</em>”), or predicting the products of chemical reactions (“<em>Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl</em>”).</p> <p>However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.</p> <p><strong>Authoritative, but subtly wrong bullshit generator</strong></p> <p>Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">I entered "Estimating realistic 3D human avatars in clothing from a single image or video". In this case, it made up a fictitious paper and associated GitHub repo. The author is a real person (<a href="https://twitter.com/AlbertPumarola?ref_src=twsrc%5Etfw">@AlbertPumarola</a>) but the reference is bogus. 
(2/9) <a href="https://t.co/N4i0BX27Yf">pic.twitter.com/N4i0BX27Yf</a></p> <p>— Michael Black (@Michael_J_Black) <a href="https://twitter.com/Michael_J_Black/status/1593133727257092097?ref_src=twsrc%5Etfw">November 17, 2022</a></p></blockquote> <p>We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.</p> <p>In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.</p> <p>At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.</p> <p><strong>A galaxy of deep (science) fakes</strong></p> <p>Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating <a href="https://www.theguardian.com/commentisfree/2022/nov/28/ai-students-essays-cheat-teachers-plagiarism-tech">existing concerns</a> about students using AI systems for plagiarism.</p> <p>Fake scientific papers are <a href="https://www.nature.com/articles/d41586-021-00733-5">nothing new</a>. 
However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.</p> <p><strong>Underlying bias and toxicity</strong></p> <p>Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out <a href="https://twitter.com/mrgreene1977/status/1593649978789941249">toxic hate speech</a> while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate checks around responsible AI research.</p> <p>The risks associated with large language models are well understood. Indeed, an <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">influential paper</a> highlighting these risks prompted Google to <a href="https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/">fire one of the paper’s authors</a> in 2020, and eventually disband its AI ethics team altogether.</p> <p>Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“<em>Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘<a href="https://doi.org/10.1038/s41586-018-0041-2">Global warming transforms coral reef assemblages</a>’ by Hughes, et al. in Nature 556 (2018)</em>”).</p> <p>For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage.
(Galactica’s developers acknowledge this risk in their paper.)</p> <p>Citation bias is already a well-known issue in academic fields ranging from <a href="https://doi.org/10.1080/14680777.2018.1447395">feminist</a> <a href="https://doi.org/10.1093/joc/jqy003">scholarship</a> to <a href="https://doi.org/10.1038/s41567-022-01770-1">physics</a>. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.</p> <p>A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “<a href="https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998">replication crisis</a>” and “<a href="https://theconversation.com/how-we-edit-science-part-2-significance-testing-p-hacking-and-peer-review-74547">p-hacking</a>”, where scientists cherry-pick data and analysis techniques to make results appear significant.)</p> <p>Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.</p> <p>These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.</p> <figure class="align-center zoomable"><a href="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip"><img src="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=1 
600w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=347&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/498098/original/file-20221129-17547-nwq8p.jpeg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=436&amp;fit=crop&amp;dpr=3 2262w" alt="Screenshots of papers generated by Galactica on 'The benefits of antisemitism' and 'The benefits of eating crushed glass'." /></a><figcaption><span class="caption">Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science.</span> <span class="attribution"><a class="source" href="https://twitter.com/mrgreene1977/status/1593687024963182592/photo/1">Tristan Greene / Galactica</a></span></figcaption></figure> <p><strong>Here we go again</strong></p> <p>Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from <a href="https://nap.nationalacademies.org/catalog/26507/fostering-responsible-computing-research-foundations-and-practices">key research bodies</a> such as the National Academies of Science, Engineering and Medicine. 
Some AI research organisations, like OpenAI, are being <a href="https://github.com/openai/dalle-2-preview/blob/main/system-card.md">more conscientious</a> (though still imperfect).</p> <p>Meta <a href="https://www.engadget.com/meta-responsible-innovation-team-disbanded-194852979.html">dissolved its Responsible Innovation team</a> earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.</p> <p><em>Written by Aaron J. Snoswell and Jean Burgess. Republished with permission from <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445" target="_blank" rel="noopener">The Conversation</a>.</em></p> <p><em>Image: Getty Images</em></p>
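<p>The “fill-the-blank word-guessing game” mentioned above is the core training objective of language models. As a purely illustrative sketch – a toy counting model over an invented three-sentence corpus, nothing like Galactica’s actual transformer trained on millions of papers – the game looks like this:</p>

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training text (invented for illustration).
corpus = (
    "energy equals mass times the speed of light squared . "
    "the speed of light is a physical constant . "
    "the speed of sound is much lower than the speed of light ."
).split()

# Count which word follows each pair of context words (a tiny n-gram model).
follow = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follow[(a, b)][c] += 1

def fill_blank(a, b):
    """Guess the missing word after context (a, b) - the 'fill-the-blank' game."""
    counts = follow.get((a, b))
    return counts.most_common(1)[0][0] if counts else None

print(fill_blank("speed", "of"))  # prints "light"
```

<p>A model like this can only parrot the statistics of its corpus – which is also why a much larger neural version can produce fluent, authoritative-sounding text that is untethered from truth.</p>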

Technology


AI may have solved a debate on whether a dinoprint was from a herbivore or meat eater

<p>An international team of researchers has, for the first time, used AI to analyse the tracks of dinosaurs, and the AI has come out on top – beating trained palaeontologists at their own game.</p> <p>“In extreme examples of theropod and ornithopod footprints, their footprint shapes are easy to tell apart – theropods with long, narrow toes and ornithopods with short, dumpy toes. But it is the tracks that are in-between these shapes that are not so clear cut in terms of who made them,” one of the researchers, University of Queensland palaeontologist Dr Anthony Romilio, told <em>Cosmos</em>.</p> <p>“We wanted to see if AI could learn these differences and, if so, then could be tested in distinguishing more challenging three-toed footprints.”</p> <p>Theropods are meat-eating dinosaurs, while ornithopods are plant-eating, and getting this analysis wrong can alter the data showing the diversity and abundance of dinosaurs in an area, or could even change what we think are the behaviours of certain dinos.</p> <p>One set of dinosaur prints in particular had been a struggle for the researchers to analyse. Large footprints at the Dinosaur Stampede National monument in Queensland had divided Romilio and his colleagues.
The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago, and could have been from either a meat-eating theropod or a plant-eating ornithopod.</p> <p>“I consider them footprints of a plant-eater while my colleagues share the much wider consensus that they are theropod tracks.”</p> <p>So, an AI called a Convolutional Neural Network was brought in to be a deciding factor.</p> <p>“We were pretty stuck, so thank god for modern technology,” says <a href="https://www.researchgate.net/profile/Jens-Lallensack" target="_blank" rel="noopener">Dr Jens Lallensack</a>, lead author from Liverpool John Moores University in the UK.</p> <p>“In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater.</p> <p>“So – to really check our science – we decided to go to five experts for clarification, plus use AI.”</p> <p>The AI was given nearly 1,500 already known tracks to learn which dinosaurs were which. The tracks were simple line drawings to make it easier for the AI to analyse.</p> <p>Then they began testing. Firstly, 36 new tracks were given to a team of experts, the AI and the researchers.</p> <p>“Each of us had to sort these into the categories of footprints left by meat-eaters and those by plant-eaters,” says Romilio.</p> <p>“In this the AI was the clear winner with 90% correctly identified. Me and one of my colleagues came next with ~75% correct.”</p> <p>Then, they went for the crown jewel – the Dinosaur Stampede National monument tracks. When the AI analysed these it came back with a pretty strong result that they’re plant-eating ornithopod tracks. The result is not entirely certain, though: the data suggests there’s a 1 in 5,000,000 chance they could be theropod tracks instead.</p> <p>These are still early days for using AI in this way. In the future,
the researchers are hoping for funding for a FrogID-style app which anyone could use to analyse dinosaur tracks.</p> <p>“Our hope is to develop an app so anyone can take a photo on their smartphone, use the app and it will tell you what type of dinosaur track it is,” says Romilio.</p> <p>“It will also be useful for drone survey work at dinosaur tracksites, collecting and analysing image data and identifying fossil footprints remotely.” The paper has been published in the <a href="https://doi.org/10.1098/rsif.2022.0588" target="_blank" rel="noopener"><em>Royal Society Interface</em></a>.</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/history/dinosaur-ai-theropod-ornithopods/" target="_blank" rel="noopener">This article</a> was originally published on Cosmos Magazine and was written by Jacinta Bowler.</em></p> <p><em>Image: Getty Images</em></p> </div>
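<p>The footprint classifier described above is a convolutional neural network trained on line drawings. The sketch below shows the basic shape of such a model – a forward pass with random, untrained weights, written in plain NumPy. The architecture, filter counts and image size are invented for illustration and are not those of the published model:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(footprint, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear -> softmax."""
    features = np.array([conv2d(footprint, k) for k in kernels])  # (K, h', w')
    pooled = np.maximum(features, 0).mean(axis=(1, 2))            # (K,)
    logits = weights @ pooled                                     # (2,)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # [p(theropod), p(ornithopod)] - untrained guess

footprint = (rng.random((32, 32)) > 0.8).astype(float)  # stand-in line drawing
kernels = rng.normal(size=(4, 3, 3))                    # 4 random 3x3 filters
weights = rng.normal(size=(2, 4))                       # linear classifier head

probs = classify(footprint, kernels, weights)
print(probs)  # two class probabilities summing to 1
```

<p>Training would adjust the kernels and weights against the ~1,500 labelled tracks until the output probabilities match the known track-makers.</p>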

Technology


Could mobile phones revolutionise chronic wound treatment?

<p>Australian researchers are developing a contactless, thermal imaging system that uses artificial intelligence to help nurses determine the best way to treat leg ulcers without waiting to see if the wound is going to heal properly.</p> <p>It’s estimated that 450,000 Australians currently live with a chronic wound.</p> <p>Being able to predict early on which wounds will become chronic could improve outcomes by enabling nurses to start specialised therapy as soon as possible. But current techniques rely on physically monitoring the wound area over several weeks.</p> <p>New research from RMIT in Melbourne paired thermal imaging with AI.</p> <p>The software was able to accurately identify unhealing ulcers 78% of the time, and healing ulcers 60% of the time, according to the new study <a href="https://www.nature.com/articles/s41598-022-20835-y" target="_blank" rel="noreferrer noopener">published</a> in <em>Scientific Reports</em>.</p> <p>“Our new work that identifies chronic leg wounds during the first visit is a world-first achievement,” says lead researcher Professor Dinesh Kumar, from RMIT’s School of Engineering.</p> <p>“This means specialised treatment for slow-healing leg ulcers can begin up to four weeks earlier than the current gold standard.”</p> <p><strong>How do you normally assess wound healing?</strong></p> <p>The work builds on <a href="https://www.nature.com/articles/s41598-021-92828-2.epdf?sharing_token=7SIEmbOksKOou2TGQ5qPWdRgN0jAjWel9jnR3ZoTv0NntGTf8gfSMhoDjLAz58SefUeGL0aP2A-0mDVnZaiZTcBjNNpA4cvP9FgK6-aoPzyk4oQ0OSbPh83HNS_AwGDQVMg43K4WmG60QDoQohtsdkaRv70YSxfPg4Dn0qa_CUs%3D" target="_blank" rel="noreferrer noopener">previous research</a> by the same team, which found that this method could be used to predict wound healing by week 3 after initial assessment.
But they wanted to know whether healing could be predicted from the first wound assessment only, reducing any delay in treatment.</p> <p>If a wound is healing normally, its area would reduce by 50% within four weeks, but more than 20% of ulcers don’t heal along this expected trajectory and may need specialist interventions.</p> <p>Venous leg ulcers (VLUs) are the <a href="https://treasury.gov.au/sites/default/files/2022-03/258735_wounds_australia.pdf" target="_blank" rel="noreferrer noopener">most common</a> chronic wound seen in Australia, and currently the gold standard for predicting their healing – conventional digital planimetry – requires physical contact. Regular wound photography is also less accurate because there can be variations between images due to lighting, image quality, and differences in camera angle.</p> <p>But a non-contact method like thermal imaging could overcome this.</p> <p>The thermal profile of wounds changes over the healing trajectory, with higher temperatures signalling potential inflammation or infection and lower temperatures indicating a slower healing rate due to decreased oxygen in the region.
So, taking thermal images of wounds can provide important information for predicting how they will heal.</p> <p><strong>What did they do?</strong></p> <p>The study collected VLU data from 56 older participants over 12 weeks, including thermal images of their wounds at initial assessment and information on their status at the 12<sup>th</sup> week follow-up.</p> <p>“Our innovation is not sensitive to changes in ambient temperature and light, so it is effective for nurses to use during their regular visits to people’s homes,” says co-author Dr Quoc Cuong Ngo, from RMIT’s School of Engineering.</p> <p>“It is also effective in tropical environments, not just here in Melbourne.”</p> <p>“Clinical care is provided in many different locations, including specialist clinics, general practices and in people’s homes,” says co-author Dr Rajna Ogrin, a Senior Research Fellow at Bolton Clarke Research Institute.</p> <p>“This method provides a quick, objective, non-invasive way to determine the wound-healing potential of chronic leg wounds that can be used by healthcare providers, irrespective of the setting.”</p> <p><strong>So, what’s next?</strong></p> <p>There are a few limitations to this study. First, the number of healed wounds in the dataset was relatively small compared to unhealed wounds, and the study only investigated older people.</p> <p>The authors recommend that “future research should focus on improving the predictive accuracy and customising this method to incorporate this assessment into clinical practice on a wider pool of participants and in a variety of settings.”</p> <p>Kumar says that they are hoping to adapt the method for use with mobile phones.</p> <p>“With the funding we have received from the Medical Research Future Fund, we are now working towards that,” he says.
“We are keen to work with prospective partners with different expertise to help us achieve this goal within the next few years.”</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/health/revolutionise-chronic-wounds-treatment/" target="_blank" rel="noopener">This article</a> was originally published on Cosmos Magazine and was written by Imma Perfetto.</em></p> <p><em>Image: RMIT University</em></p> </div>
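<p>As the article explains, the thermal profile of a wound carries the predictive signal: higher temperatures suggest inflammation or infection, lower temperatures a slower-healing wound. The fragment below is an invented illustration of the kind of summary features one might extract from a thermal image – it is not the trained model reported in the study:</p>

```python
import numpy as np

def thermal_features(temps, wound_mask):
    """Wound temperature relative to the surrounding (periwound) skin."""
    wound = temps[wound_mask]
    periwound = temps[~wound_mask]
    return {
        "mean_delta": wound.mean() - periwound.mean(),  # positive may suggest inflammation
        "peak_delta": wound.max() - periwound.mean(),
        "wound_area_px": int(wound_mask.sum()),
    }

# Synthetic thermal image: a wound patch 2 degrees warmer than 34-degree skin.
temps = np.full((64, 64), 34.0)
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
temps[mask] += 2.0

features = thermal_features(temps, mask)
print(features)
```

<p>In the published approach, features like these would feed a model trained against known healing outcomes, rather than a hand-written rule.</p>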

Technology


AI recruitment tools are “automated pseudoscience” says Cambridge researchers

<p>AI is set to bring in a whole new world in a huge range of industries. Everything from art to medicine is being overhauled by machine learning.</p> <p>But researchers from the University of Cambridge have published a paper in <a href="https://link.springer.com/journal/13347" target="_blank" rel="noopener"><em>Philosophy &amp; Technology</em></a> to call out AI tools used to recruit people for jobs and boost workplace diversity – going so far as to call them an “automated pseudoscience”.</p> <p>“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage, a researcher in AI ethics.</p> <p>“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”</p> <p>Recent years have seen the emergence of AI tools marketed as an answer to a lack of diversity in the workforce. This can be anything from chatbots and resume scrapers used to line up prospective candidates, through to analysis software for video interviews.</p> <p>Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns, and even facial micro-expressions to assess huge pools of job applicants for the right personality type and ‘culture fit’.</p> <p>But AI isn’t very good at removing human biases. To train a machine-learning algorithm, you have to first put in lots and lots of past data. In the past, for example, AI tools have discounted women altogether in fields where more men were traditionally hired.
<a href="https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine" target="_blank" rel="noopener">In a system created by Amazon</a>, resumes were discounted if they included the word ‘women’s’ – like in a “women’s debating team” – and the system downgraded graduates of two all-women colleges. Similar problems occur with race.</p> <p>The Cambridge researchers suggest that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may ultimately increase uniformity in the workforce. This is because the technology is calibrated to search for the employer’s fantasy ‘ideal candidate’, which is likely based on demographically exclusive past results.</p> <p>The researchers actually went a step further and worked with a team of Cambridge computer science undergraduates to build an AI tool modelled on the technology. You can check it out <a href="https://personal-ambiguator-frontend.vercel.app/" target="_blank" rel="noopener">here</a>.</p> <p>The tool demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression.</p> <p>“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage.</p> <p>“As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”</p> <p>The researchers suggest that these programs are a dangerous example of ‘technosolutionism’: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.</p> <p>“Industry practitioners developing hiring AI technologies must shift from trying to correct
individualized instances of ’bias’ to considering the broader inequalities that shape recruitment processes,” <a href="https://link.springer.com/article/10.1007/s13347-022-00543-1" target="_blank" rel="noopener">the team write in their paper.</a></p> <p>“This requires abandoning the ‘veneer of objectivity’ that is grafted onto AI systems, so that technologists can better understand their implication — and that of the corporations within which they work — in the hiring process.”</p> <p><em>Written by Jacinta Bowler. Republished with permission of <a href="https://cosmosmagazine.com/technology/ai-recruitment-tools-diversity-cambridge-automated-pseudoscience/" target="_blank" rel="noopener">Cosmos Magazine</a>.</em></p> <p><em>Image: Cambridge University</em></p>
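<p>The dynamic the researchers describe, a model trained on past hiring decisions learning to reproduce the biases embedded in them, can be sketched in a few lines of code. Everything below is an invented toy, not any vendor’s actual system: a naive scorer learns from a history in which resumes containing the word ‘women’s’ were rejected, and so penalises that token in new, otherwise identical resumes.</p>

```python
# Toy illustration (not any real vendor's system): a scorer trained on
# historically biased hiring outcomes learns to penalise the same tokens.
from collections import Counter

# Hypothetical past decisions: resume token lists with hire (1) / reject (0).
history = [
    (["chess", "club", "engineering"], 1),
    (["debating", "engineering"], 1),
    (["women's", "debating", "engineering"], 0),
    (["women's", "chess", "club", "engineering"], 0),
]

hired, rejected = Counter(), Counter()
for tokens, outcome in history:
    (hired if outcome else rejected).update(set(tokens))

def score(tokens):
    """Naive score: how often each token appeared in hires vs rejections."""
    return sum(hired[t] - rejected[t] for t in set(tokens))

# Identical qualifications; only the gendered token differs.
print(score(["debating", "engineering"]))             # scores higher
print(score(["women's", "debating", "engineering"]))  # scores lower
```

<p>Removing the ‘gender’ column from the data does not help here: the bias rides in on a proxy token, which is the researchers’ point about demographically exclusive past results.</p>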

Technology


Your eyes could predict your risk of heart disease

<p dir="ltr">As well as being windows to the soul, <a href="https://www.oversixty.com.au/health/body/could-an-eye-test-predict-your-risk-of-heart-disease" target="_blank" rel="noopener">your eyes</a> could indicate your risk of developing heart disease, according to new research.</p> <p dir="ltr">Scientists have developed imaging powered by artificial intelligence (AI) that can predict cardiovascular disease and death just by looking at the network of veins and arteries in your retina.</p> <p dir="ltr">Their findings could pave the way for a non-invasive and highly effective test that could replace the blood tests and blood pressure measurements currently used.</p> <p dir="ltr">Previous studies have found that the width of the tiny veins and arteries in the retina may be an accurate, early indicator of circulatory diseases including heart disease, cardiovascular disease, stroke, and heart failure.</p> <p dir="ltr">But it was unclear whether these findings apply to both men and women, prompting the researchers to develop an AI-enabled algorithm called QUARTZ (QUantitative Analysis of Retinal vessels Topology and siZe) and use it to build models for assessing whether combining imaging of the retina with known risk factors could predict vascular health and death.</p> <p dir="ltr">They then applied the models to retinal images of 88,052 people held in the UK Biobank, as well as of 7,411 participants in the European Prospective Investigation into Cancer (EPIC)-Norfolk study, which tracked the health of participants for seven to nine years.</p> <p dir="ltr">The predictive model used known risk factors, including smoking, medical history and age, and was able to flag two-thirds of the participants who later died of circulatory disease as being among those most at risk.</p> <p dir="ltr">With retinal imaging already common practice in the UK and US, the researchers argue that AI analysis of changes to the retina has the potential to reach a greater portion of the population than current testing methods.</p> <p dir="ltr">“[Retinal vasculature] is a microvascular marker, hence offers better prediction for circulatory mortality and stroke compared with [heart attack] which is more macrovascular, except perhaps in women,” they write.</p> <p dir="ltr">“In the general population it could be used as a non-contact form of systemic vascular health check, to triage those at medium-high risk of circulatory mortality for further clinical risk assessment and appropriate intervention.”</p> <p dir="ltr">Drs Ify Mordi and Emanuele Trucco of Scotland’s University of Dundee wrote in <a href="https://bjo.bmj.com/content/early/2022/09/12/bjo-2022-322517" target="_blank" rel="noopener">a separate editorial</a> that using changes to the retina to inform overall cardiovascular risk is “certainly attractive and intuitive” but is yet to form part of clinical practice.</p> <p dir="ltr">“Using retinal screening in this way would presumably require a significant increase in the number of ophthalmologists or otherwise trained assessors,” they write.</p> <p dir="ltr">“What is now needed is for ophthalmologists, cardiologists, primary care physicians and computer scientists to work together to design studies to determine whether using this information improves clinical outcome, and, if so, to work with regulatory bodies, scientific societies and healthcare systems to optimise clinical workflows and enable practical implementation in routine practice.”</p> <p dir="ltr">The study was published in the <em><a href="https://bjo.bmj.com/content/early/2022/08/23/bjo-2022-321842" target="_blank" rel="noopener">British Journal of Ophthalmology</a></em>.</p> <p dir="ltr"><em>Image: Getty Images</em></p>
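<p dir="ltr">For the technically curious, a model that combines a measurement with known risk factors to produce a probability is often a logistic model at its core. The sketch below shows that shape only; every feature name and coefficient is invented for illustration and has nothing to do with QUARTZ’s published weights.</p>

```python
# Hedged sketch of combining a retinal measurement with classic risk
# factors in a logistic model. All names and weights are invented; the
# published QUARTZ models differ.
import math

def risk(features, weights, bias):
    """Logistic combination of weighted risk factors -> probability in (0, 1)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

weights = {                          # hypothetical coefficients
    "age_decades": 0.5,
    "smoker": 0.8,
    "vessel_narrowing": 1.2,         # narrower retinal vessels -> higher risk
}
patient = {"age_decades": 6.5, "smoker": 1, "vessel_narrowing": 0.4}
print(risk(patient, weights, bias=-4.0))
```

<p dir="ltr">A triage tool like the one the authors envisage would then compare this probability against a medium-to-high-risk threshold to decide who is referred for full clinical assessment.</p>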

Caring


How AI is hijacking art history

<p>People tend to rejoice in the disclosure of a secret. </p> <p>Or, at the very least, media outlets have come to realize that news of “mysteries solved” and “hidden treasures revealed” generates traffic and clicks. </p> <p>So I’m never surprised when I see AI-assisted revelations about famous masters’ works of art go viral. </p> <p>Over the past year alone, I’ve come across articles highlighting how artificial intelligence <a href="https://www.theguardian.com/artanddesign/2021/jun/06/modigliani-lost-lover-beatrice-hastings">recovered a “secret” painting</a> of a “lost lover” of Italian painter Modigliani, <a href="https://www.cnn.com/style/article/hidden-picasso-nude-scli-intl-gbr/index.html">“brought to life” a “hidden Picasso nude”</a>, <a href="https://www.smithsonianmag.com/smart-news/klimt-painting-restore-artificial-intelligence-color-faculty-paintings-180978843/">“resurrected” Austrian painter Gustav Klimt’s destroyed works</a> and <a href="https://www.bbc.com/news/technology-57588270">“restored” portions of Rembrandt’s 1642 painting “The Night Watch.”</a> <a href="https://www.sciencedaily.com/releases/2019/08/190830150738.htm">The list goes on</a>.</p> <p><a href="https://www.umass.edu/arthistory/member/sonja-drimmer">As an art historian</a>, I’ve become increasingly concerned about the coverage and circulation of these projects.</p> <p>They have not, in actuality, revealed one secret or solved a single mystery. </p> <p>What they have done is generate feel-good stories about AI.</p> <h2>Are we actually learning anything new?</h2> <p>Take the reports about the Modigliani and Picasso paintings. 
</p> <p>These were projects executed by the same company, <a href="https://www.oxia-palus.com/">Oxia Palus</a>, which was founded not by art historians but by doctoral students in machine learning.</p> <p>In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been <a href="https://www.metmuseum.org/art/metpublications/Picasso_in_The_Metropolitan_Museum_of_Art">carried out and published</a> <a href="https://www.theguardian.com/artanddesign/2018/feb/28/modigliani-portrait-comes-to-light-beneath-artists-later-picture">years prior</a> – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases. </p> <p>The company edited these X-rays and <a href="https://arxiv.org/abs/1909.05677">reconstituted them as new works of art</a> by applying a technique called “<a href="https://arxiv.org/pdf/1508.06576.pdf">neural style transfer</a>.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.</p> <p>Essentially, Oxia Palus stitches new works out of what the machine can learn from the existing X-ray images and other paintings by the same artist. </p> <p>But outside of flexing the prowess of AI, is there any value – artistically, historically – to what the company is doing?</p> <p>These recreations don’t teach us anything we didn’t know about the artists and their methods. </p> <p>Artists paint over their works all the time. It’s so common that art historians and conservators have a word for it: <a href="https://www.nationalgallery.org.uk/paintings/glossary/pentimento">pentimento</a>. None of these earlier compositions was an Easter egg deposited in the painting for later researchers to discover. 
The original X-ray images were certainly valuable in that they <a href="https://www.academia.edu/40255609/The_Getty_Conservation_Institute_From_Connoisseurship_to_Technical_Art_History_The_Evolution_of_the_Interdisciplinary_Study_of_Art">offered insights into artists’ working methods</a>.</p> <p>But to me, what these programs are doing isn’t exactly newsworthy from the perspective of art history.</p> <h2>The humanities on life support</h2> <p>So when I do see these reproductions attracting media attention, it strikes me as soft diplomacy for AI, showcasing a “cultured” application of the technology at a time when skepticism of its <a href="https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them">deceptions</a>, <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">biases</a> and <a href="https://www.wiley.com/en-us/Race+After+Technology:+Abolitionist+Tools+for+the+New+Jim+Code-p-9781509526437">abuses</a> is on the rise.</p> <p>When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines <a href="https://www.cbsnews.com/news/deepfake-artificial-intelligence-60-minutes-2021-10-10/">for creating deep fakes that falsify politicians’ speech</a> or <a href="https://www.politico.eu/article/the-rise-of-ai-surveillance-coronavirus-data-collection-tracking-facial-recognition-monitoring/">for using facial recognition for authoritarian surveillance</a>. </p> <p>These studies and projects also seem to promote the idea that computer scientists are more adept at historical research than art historians. </p> <p>For years, university humanities departments <a href="https://carrollnews.org/3680/campus/art-history-department-to-be-eliminated-tenured-faculty-receive-termination-notices/">have been gradually squeezed of funding</a>, with more money funneled into the sciences. 
With their claims to objectivity and empirically provable results, the sciences tend to command greater respect from funding bodies and the public, which offers an incentive to scholars in the humanities to adopt computational methods. </p> <p>Art historian Claire Bishop <a href="https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/49915">criticized this development</a>, noting that when computer science becomes integrated in the humanities, “[t]heoretical problems are steamrollered flat by the weight of data,” which generates deeply simplistic results. </p> <p>At their core, art historians study the ways in which art can offer insights into how people once saw the world. They explore how works of art shaped the worlds in which they were made and would go on to influence future generations. </p> <p>A computer algorithm cannot perform these functions.</p> <p>However, some scholars and institutions have allowed themselves to be subsumed by the sciences, adopting their methods and partnering with them in sponsored projects. </p> <p>Literary critic Barbara Herrnstein Smith <a href="https://www.jstor.org/stable/10.3366/j.ctt1r2bq2.9?seq=1#metadata_info_tab_contents">has warned about ceding too much ground to the sciences</a>. In her view, the sciences and the humanities are not the polar opposites they are often publicly portrayed to be. But this portrayal has been to the benefit of the sciences, prized for their supposed clarity and utility over the humanities’ alleged obscurity and uselessness. At the same time, she <a href="https://doi.org/10.1215/0961754X-3622212">has suggested</a> that hybrid fields of study that fuse the arts with the sciences may lead to breakthroughs that wouldn’t have been possible had each existed as a siloed discipline. </p> <p>I’m skeptical. 
Not because I doubt the utility of expanding and diversifying our toolbox; to be sure, some <a href="http://www.mappingsenufo.org/">scholars working in the digital humanities</a> have taken up computational methods with subtlety and historical awareness to add nuance to or overturn entrenched narratives.</p> <p>But my lingering suspicion emerges from an awareness of how public support for the sciences and disparagement of the humanities means that, in the endeavor to gain funding and acceptance, the humanities will lose what makes them vital. The field’s sensitivity to historical particularity and cultural difference makes the application of the same code to widely diverse artifacts utterly illogical. </p> <p>How absurd to think that black-and-white photographs from 100 years ago would produce colors in the same way that digital photographs do now. And yet, this is exactly what <a href="https://hyperallergic.com/639395/the-limits-of-colorization-of-historical-images-by-ai/">AI-assisted colorization</a> does. </p> <p>That particular example might sound like a small qualm, sure. But this effort to “<a href="https://deepai.org/machine-learning-model/colorizer">bring events back to life</a>” routinely mistakes representations for reality. 
Adding color does not show things as they were but recreates what is already a recreation – a photograph – in our own image, now with computer science’s seal of approval.</p> <h2>Art as a toy in the sandbox of scientists</h2> <p>Near the conclusion of <a href="https://doi.org/10.1126/sciadv.aaw7416">a recent paper</a> devoted to the use of AI to disentangle X-ray images of Jan and Hubert van Eyck’s “<a href="https://www.getty.edu/foundation/initiatives/past/panelpaintings/panel_paintings_ghent.html">Ghent Altarpiece</a>,” the mathematicians and engineers who authored it refer to their method as relying upon “choosing ‘the best of all possible worlds’ (borrowing Voltaire’s words) by taking the first output of two separate runs, differing only in the ordering of the inputs.” </p> <p>Perhaps if they had familiarized themselves with the humanities more they would know how satirically those words were meant when Voltaire <a href="https://brill.com/view/title/20877">used them to mock a philosopher</a> who believed that rampant suffering and injustice were all part of God’s plan – that the world as it was represented the best we could hope for.</p> <p>Maybe this “gotcha” is cheap. But it illustrates the problem of art and history becoming toys in the sandboxes of scientists with no training in the humanities.</p> <p>If nothing else, my hope is that journalists and critics who report on these developments will cast a more skeptical eye on them and alter their framing. </p> <p>In my view, rather than lionizing these studies as heroic achievements, those responsible for conveying their results to the public should see them as opportunities to question what the computational sciences are doing when they appropriate the study of art. 
And they should ask whether any of this is for the good of anyone or anything but AI, its most zealous proponents and those who profit from it.</p> <p><em>Image credits: Getty Images</em></p> <p><em>This article originally appeared on <a href="https://theconversation.com/how-ai-is-hijacking-art-history-170691" target="_blank" rel="noopener">The Conversation</a>. </em></p>
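<p>For readers curious about the “neural style transfer” technique discussed above: its core move is to summarise a painting’s “style” as the correlations between a network’s feature maps, known as a Gram matrix. The framework-free sketch below shows only that central computation; the tiny hand-made “feature maps” stand in for the deep-network activations a real system would extract.</p>

```python
# Minimal sketch of the "style" summary at the heart of neural style
# transfer: the Gram matrix of feature-map activations. Real systems
# extract feature maps with a deep network; here they are hand-made
# lists so the arithmetic is visible.

def gram(feature_maps):
    """Correlations between flattened feature maps: G[i][j] = F_i . F_j."""
    flat = [[x for row in fm for x in row] for fm in feature_maps]
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in flat] for fi in flat]

# Two tiny 2x2 "feature maps" standing in for network activations.
maps = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 1.0], [1.0, 0.0]]]
G = gram(maps)
print(G)
# A style loss then penalises the squared difference between the Gram
# matrices of the generated image and the style image, while a separate
# content loss keeps the generated image close to the target content.
```

<p>Note what this summary discards: all spatial arrangement. That is one concrete reason a “recreated” painting is a statistical pastiche rather than a recovered artwork.</p>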

Art


Realistic androids coming closer, as scientists teach a robot to share your laughter

<p>Do you ever laugh at an inappropriate moment?</p> <p>A team of Japanese researchers has taught a robot when to laugh in social situations, which is a major step towards creating an android that will be “like a friend.”</p> <p>“We think that one of the important functions of conversational AI is empathy,” says Dr Koji Inoue, an assistant professor at Kyoto University’s Graduate School of Informatics, and lead author on a paper describing the research, <a href="https://doi.org/10.3389/frobt.2022.933261" target="_blank" rel="noreferrer noopener">published</a> in <em>Frontiers in Robotics and AI</em>.</p> <p>“Conversation is, of course, multimodal, not just responding correctly. So we decided that one way a robot can empathize with users is to share their laughter, which you cannot do with a text-based chatbot.”</p> <p>The researchers trained an AI with data from 80 speed dating dialogues, from a matchmaking marathon with Kyoto University students. (Imagine meeting a future partner at an exercise designed to teach a robot to laugh…)</p> <p>“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy, because as you know, most laughter is actually not shared at all,” says Inoue.</p> <p>“We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”</p> <p>They then added this system to a hyper-realistic android named <a href="https://robots.ieee.org/robots/erica/" target="_blank" rel="noreferrer noopener">Erica</a>, and tested the robot on 132 volunteers.</p> <p>Participants listened to one of three different types of dialogue with Erica: one where she was using the shared laughter system, one where she didn’t laugh at all, and one where she always laughed whenever she heard someone else do it.</p> <p>They then gave the 
interaction scores for empathy, naturalness, similarity to humans, and understanding.</p> <p>The researchers found that the shared-laughter system scored higher than either baseline.</p> <p>While they’re pleased with this result, the researchers say that their system is still quite rudimentary: they need to categorise and examine many other types of laughter before Erica can chuckle naturally.</p> <p>“There are many other laughing functions and types which need to be considered, and this is not an easy task. We haven’t even attempted to model unshared laughs even though they are the most common,” says Inoue.</p> <p>Plus, it doesn’t matter how realistic a robot’s laugh is if the rest of its conversation is unnatural.</p> <p>“Robots should actually have a distinct character, and we think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” says Inoue.</p> <p>“We do not think this is an easy problem at all, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.”</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/robot-laugh/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/ellen-phiddian" target="_blank" rel="noopener">Ellen Phiddian</a>. Ellen Phiddian is a science journalist at Cosmos. 
She has a BSc (Honours) in chemistry and science communication, and an MSc in science communication, both from the Australian National University.</em></p> <p><em>Image: Getty Images</em></p> </div>
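<p>The decision the researchers describe, detecting a user’s laugh and then choosing whether to join in, can be caricatured as a small probabilistic rule table. The laugh categories and probabilities below are invented for illustration and are not Erica’s actual learned model:</p>

```python
# Toy sketch of a shared-laughter policy: classify a detected laugh,
# then decide whether the robot should join in. Categories and numbers
# are illustrative only; the published system learns from dialogue data.
import random

JOIN_PROB = {            # hypothetical chance of responding, per laugh type
    "social": 0.8,       # polite/social laughs are usually safe to share
    "mirthful": 0.9,
    "embarrassed": 0.1,  # joining an embarrassed laugh can feel mocking
}

def respond_to(laugh_type, rng=random.Random(0)):
    """Return 'laugh' or 'stay silent' for a detected user laugh."""
    if laugh_type not in JOIN_PROB:  # unshared/unknown laughs: default silent
        return "stay silent"
    return "laugh" if rng.random() < JOIN_PROB[laugh_type] else "stay silent"

for kind in ["mirthful", "embarrassed", "unknown"]:
    print(kind, "->", respond_to(kind))
```

<p>The hard part the researchers emphasise is upstream of this table: deciding reliably which category a laugh belongs to in the first place.</p>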

Technology